Test Report: KVM_Linux_crio 18588

801f50a102c40cfdc9fc79f6fcbe1cefa0ef9ea3:2024-04-08:33935

Failed tests (29/325)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 154.08
47 TestAddons/parallel/LocalPath 13.35
53 TestAddons/StoppedEnableDisable 154.51
172 TestMultiControlPlane/serial/StopSecondaryNode 142.14
174 TestMultiControlPlane/serial/RestartSecondaryNode 59.52
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 363.45
179 TestMultiControlPlane/serial/StopCluster 142.33
239 TestMultiNode/serial/RestartKeepsNodes 305.29
241 TestMultiNode/serial/StopMultiNode 141.64
248 TestPreload 316.43
256 TestKubernetesUpgrade 420.3
336 TestStartStop/group/old-k8s-version/serial/FirstStart 275.38
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.11
357 TestStartStop/group/no-preload/serial/Stop 139.1
359 TestStartStop/group/embed-certs/serial/Stop 139.19
360 TestStartStop/group/old-k8s-version/serial/DeployApp 0.53
361 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 70.32
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
370 TestStartStop/group/old-k8s-version/serial/SecondStart 800.27
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.45
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.47
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.4
374 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.59
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 366.74
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 360.87
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 342.72
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 108.93
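
Any individual failure in the table can be re-run locally with go test's -run filter against a freshly built minikube binary. The snippet below is a rough sketch, not the exact CI invocation: the go test flags are standard, but the -minikube-start-args flag (used here to select the kvm2 driver and crio runtime, matching this job) is defined by the integration suite itself and should be checked against the contributor docs before use.

# Build the minikube binary the integration tests drive (exact make target is an assumption).
cd minikube && make
# Re-run a single failed test from this report:
go test ./test/integration -v -timeout 60m \
  -run 'TestAddons/parallel/Ingress' \
  -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'
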
TestAddons/parallel/Ingress (154.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-825010 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-825010 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-825010 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8947abd8-1bf9-4645-ab59-b86e4e1fd8f3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8947abd8-1bf9-4645-ab59-b86e4e1fd8f3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004771149s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-825010 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.822566724s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-825010 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.221
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-825010 addons disable ingress --alsologtostderr -v=1: (7.811156737s)
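
The step that fails is the in-VM curl against the ingress endpoint; exit status 28 matches curl's operation-timed-out error code, which suggests the request never completed against a ready controller. A rough manual reproduction against a still-running addons-825010 profile would look like the following; the commands mirror the ones recorded above, with an explicit (arbitrarily chosen) 30s timeout added.

# Is the ingress controller actually Ready?
kubectl --context addons-825010 -n ingress-nginx get pods \
  -l app.kubernetes.io/component=controller
# What ingress objects, services, and pods exist in the default namespace?
kubectl --context addons-825010 get ingress,svc,pods -o wide
# Repeat the failing in-VM request with an explicit 30s cap and verbose output:
out/minikube-linux-amd64 -p addons-825010 ssh \
  "curl -sv -m 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
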
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-825010 -n addons-825010
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-825010 logs -n 25: (1.438929285s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-531329                                                                     | download-only-531329 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC | 08 Apr 24 11:21 UTC |
	| delete  | -p download-only-750624                                                                     | download-only-750624 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC | 08 Apr 24 11:21 UTC |
	| delete  | -p download-only-879549                                                                     | download-only-879549 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC | 08 Apr 24 11:21 UTC |
	| delete  | -p download-only-531329                                                                     | download-only-531329 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC | 08 Apr 24 11:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-068480 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC |                     |
	|         | binary-mirror-068480                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:42539                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-068480                                                                     | binary-mirror-068480 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC | 08 Apr 24 11:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC |                     |
	|         | addons-825010                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC |                     |
	|         | addons-825010                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-825010 --wait=true                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:21 UTC | 08 Apr 24 11:23 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:23 UTC | 08 Apr 24 11:23 UTC |
	|         | addons-825010                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-825010 ssh cat                                                                       | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:23 UTC | 08 Apr 24 11:23 UTC |
	|         | /opt/local-path-provisioner/pvc-7d75c78d-eccc-423f-92dd-5653fcb66ade_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-825010 addons disable                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:23 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:23 UTC | 08 Apr 24 11:23 UTC |
	|         | -p addons-825010                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-825010 ip                                                                            | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC | 08 Apr 24 11:24 UTC |
	| addons  | addons-825010 addons disable                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC | 08 Apr 24 11:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC | 08 Apr 24 11:24 UTC |
	|         | -p addons-825010                                                                            |                      |         |                |                     |                     |
	| addons  | addons-825010 addons                                                                        | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC | 08 Apr 24 11:24 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC | 08 Apr 24 11:24 UTC |
	|         | addons-825010                                                                               |                      |         |                |                     |                     |
	| addons  | addons-825010 addons disable                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC | 08 Apr 24 11:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-825010 ssh curl -s                                                                   | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | addons-825010 addons                                                                        | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:25 UTC | 08 Apr 24 11:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-825010 addons                                                                        | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:25 UTC | 08 Apr 24 11:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-825010 ip                                                                            | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:26 UTC | 08 Apr 24 11:26 UTC |
	| addons  | addons-825010 addons disable                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:26 UTC | 08 Apr 24 11:26 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-825010 addons disable                                                                | addons-825010        | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:26 UTC | 08 Apr 24 11:26 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:21:13
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:21:13.490012  376679 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:21:13.490312  376679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:21:13.490324  376679 out.go:304] Setting ErrFile to fd 2...
	I0408 11:21:13.490328  376679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:21:13.490602  376679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:21:13.491330  376679 out.go:298] Setting JSON to false
	I0408 11:21:13.492409  376679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3817,"bootTime":1712571457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:21:13.492484  376679 start.go:139] virtualization: kvm guest
	I0408 11:21:13.494907  376679 out.go:177] * [addons-825010] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:21:13.496499  376679 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:21:13.496433  376679 notify.go:220] Checking for updates...
	I0408 11:21:13.497809  376679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:21:13.499571  376679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:21:13.501231  376679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:21:13.502707  376679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:21:13.504123  376679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:21:13.505599  376679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:21:13.537571  376679 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 11:21:13.539034  376679 start.go:297] selected driver: kvm2
	I0408 11:21:13.539048  376679 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:21:13.539061  376679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:21:13.539903  376679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:21:13.540009  376679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:21:13.555593  376679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:21:13.555654  376679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:21:13.555945  376679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:21:13.556018  376679 cni.go:84] Creating CNI manager for ""
	I0408 11:21:13.556032  376679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 11:21:13.556040  376679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:21:13.556099  376679 start.go:340] cluster config:
	{Name:addons-825010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-825010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:21:13.556201  376679 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:21:13.558114  376679 out.go:177] * Starting "addons-825010" primary control-plane node in "addons-825010" cluster
	I0408 11:21:13.559868  376679 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:21:13.559938  376679 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 11:21:13.559955  376679 cache.go:56] Caching tarball of preloaded images
	I0408 11:21:13.560048  376679 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:21:13.560060  376679 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:21:13.560371  376679 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/config.json ...
	I0408 11:21:13.560394  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/config.json: {Name:mka42bd734f3beda7e9c926a450223a680ffed75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:13.560566  376679 start.go:360] acquireMachinesLock for addons-825010: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:21:13.560636  376679 start.go:364] duration metric: took 52.651µs to acquireMachinesLock for "addons-825010"
	I0408 11:21:13.560660  376679 start.go:93] Provisioning new machine with config: &{Name:addons-825010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-825010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:21:13.560727  376679 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 11:21:13.562500  376679 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0408 11:21:13.562660  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:21:13.562701  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:21:13.577463  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38779
	I0408 11:21:13.577977  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:21:13.578702  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:21:13.578726  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:21:13.579084  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:21:13.579319  376679 main.go:141] libmachine: (addons-825010) Calling .GetMachineName
	I0408 11:21:13.579509  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:13.579736  376679 start.go:159] libmachine.API.Create for "addons-825010" (driver="kvm2")
	I0408 11:21:13.579768  376679 client.go:168] LocalClient.Create starting
	I0408 11:21:13.579811  376679 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:21:13.818987  376679 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:21:13.898642  376679 main.go:141] libmachine: Running pre-create checks...
	I0408 11:21:13.898668  376679 main.go:141] libmachine: (addons-825010) Calling .PreCreateCheck
	I0408 11:21:13.899191  376679 main.go:141] libmachine: (addons-825010) Calling .GetConfigRaw
	I0408 11:21:13.899783  376679 main.go:141] libmachine: Creating machine...
	I0408 11:21:13.899801  376679 main.go:141] libmachine: (addons-825010) Calling .Create
	I0408 11:21:13.899998  376679 main.go:141] libmachine: (addons-825010) Creating KVM machine...
	I0408 11:21:13.901265  376679 main.go:141] libmachine: (addons-825010) DBG | found existing default KVM network
	I0408 11:21:13.902105  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:13.901938  376701 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0408 11:21:13.902147  376679 main.go:141] libmachine: (addons-825010) DBG | created network xml: 
	I0408 11:21:13.902171  376679 main.go:141] libmachine: (addons-825010) DBG | <network>
	I0408 11:21:13.902178  376679 main.go:141] libmachine: (addons-825010) DBG |   <name>mk-addons-825010</name>
	I0408 11:21:13.902183  376679 main.go:141] libmachine: (addons-825010) DBG |   <dns enable='no'/>
	I0408 11:21:13.902192  376679 main.go:141] libmachine: (addons-825010) DBG |   
	I0408 11:21:13.902198  376679 main.go:141] libmachine: (addons-825010) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 11:21:13.902207  376679 main.go:141] libmachine: (addons-825010) DBG |     <dhcp>
	I0408 11:21:13.902213  376679 main.go:141] libmachine: (addons-825010) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 11:21:13.902222  376679 main.go:141] libmachine: (addons-825010) DBG |     </dhcp>
	I0408 11:21:13.902226  376679 main.go:141] libmachine: (addons-825010) DBG |   </ip>
	I0408 11:21:13.902236  376679 main.go:141] libmachine: (addons-825010) DBG |   
	I0408 11:21:13.902247  376679 main.go:141] libmachine: (addons-825010) DBG | </network>
	I0408 11:21:13.902334  376679 main.go:141] libmachine: (addons-825010) DBG | 
	I0408 11:21:13.908075  376679 main.go:141] libmachine: (addons-825010) DBG | trying to create private KVM network mk-addons-825010 192.168.39.0/24...
	I0408 11:21:13.974500  376679 main.go:141] libmachine: (addons-825010) DBG | private KVM network mk-addons-825010 192.168.39.0/24 created
	I0408 11:21:13.974536  376679 main.go:141] libmachine: (addons-825010) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010 ...
	I0408 11:21:13.974571  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:13.974444  376701 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:21:13.974588  376679 main.go:141] libmachine: (addons-825010) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:21:13.974616  376679 main.go:141] libmachine: (addons-825010) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:21:14.240241  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:14.240107  376701 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa...
	I0408 11:21:14.339581  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:14.339372  376701 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/addons-825010.rawdisk...
	I0408 11:21:14.339617  376679 main.go:141] libmachine: (addons-825010) DBG | Writing magic tar header
	I0408 11:21:14.339632  376679 main.go:141] libmachine: (addons-825010) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010 (perms=drwx------)
	I0408 11:21:14.339673  376679 main.go:141] libmachine: (addons-825010) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:21:14.339708  376679 main.go:141] libmachine: (addons-825010) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:21:14.339717  376679 main.go:141] libmachine: (addons-825010) DBG | Writing SSH key tar header
	I0408 11:21:14.339741  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:14.339498  376701 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010 ...
	I0408 11:21:14.339753  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010
	I0408 11:21:14.339764  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:21:14.339774  376679 main.go:141] libmachine: (addons-825010) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:21:14.339783  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:21:14.339799  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:21:14.339809  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:21:14.339819  376679 main.go:141] libmachine: (addons-825010) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:21:14.339852  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:21:14.339878  376679 main.go:141] libmachine: (addons-825010) DBG | Checking permissions on dir: /home
	I0408 11:21:14.339890  376679 main.go:141] libmachine: (addons-825010) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:21:14.339900  376679 main.go:141] libmachine: (addons-825010) DBG | Skipping /home - not owner
	I0408 11:21:14.339907  376679 main.go:141] libmachine: (addons-825010) Creating domain...
	I0408 11:21:14.340997  376679 main.go:141] libmachine: (addons-825010) define libvirt domain using xml: 
	I0408 11:21:14.341033  376679 main.go:141] libmachine: (addons-825010) <domain type='kvm'>
	I0408 11:21:14.341054  376679 main.go:141] libmachine: (addons-825010)   <name>addons-825010</name>
	I0408 11:21:14.341067  376679 main.go:141] libmachine: (addons-825010)   <memory unit='MiB'>4000</memory>
	I0408 11:21:14.341076  376679 main.go:141] libmachine: (addons-825010)   <vcpu>2</vcpu>
	I0408 11:21:14.341084  376679 main.go:141] libmachine: (addons-825010)   <features>
	I0408 11:21:14.341095  376679 main.go:141] libmachine: (addons-825010)     <acpi/>
	I0408 11:21:14.341099  376679 main.go:141] libmachine: (addons-825010)     <apic/>
	I0408 11:21:14.341105  376679 main.go:141] libmachine: (addons-825010)     <pae/>
	I0408 11:21:14.341110  376679 main.go:141] libmachine: (addons-825010)     
	I0408 11:21:14.341115  376679 main.go:141] libmachine: (addons-825010)   </features>
	I0408 11:21:14.341122  376679 main.go:141] libmachine: (addons-825010)   <cpu mode='host-passthrough'>
	I0408 11:21:14.341129  376679 main.go:141] libmachine: (addons-825010)   
	I0408 11:21:14.341138  376679 main.go:141] libmachine: (addons-825010)   </cpu>
	I0408 11:21:14.341154  376679 main.go:141] libmachine: (addons-825010)   <os>
	I0408 11:21:14.341165  376679 main.go:141] libmachine: (addons-825010)     <type>hvm</type>
	I0408 11:21:14.341174  376679 main.go:141] libmachine: (addons-825010)     <boot dev='cdrom'/>
	I0408 11:21:14.341179  376679 main.go:141] libmachine: (addons-825010)     <boot dev='hd'/>
	I0408 11:21:14.341186  376679 main.go:141] libmachine: (addons-825010)     <bootmenu enable='no'/>
	I0408 11:21:14.341190  376679 main.go:141] libmachine: (addons-825010)   </os>
	I0408 11:21:14.341196  376679 main.go:141] libmachine: (addons-825010)   <devices>
	I0408 11:21:14.341201  376679 main.go:141] libmachine: (addons-825010)     <disk type='file' device='cdrom'>
	I0408 11:21:14.341210  376679 main.go:141] libmachine: (addons-825010)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/boot2docker.iso'/>
	I0408 11:21:14.341221  376679 main.go:141] libmachine: (addons-825010)       <target dev='hdc' bus='scsi'/>
	I0408 11:21:14.341226  376679 main.go:141] libmachine: (addons-825010)       <readonly/>
	I0408 11:21:14.341230  376679 main.go:141] libmachine: (addons-825010)     </disk>
	I0408 11:21:14.341253  376679 main.go:141] libmachine: (addons-825010)     <disk type='file' device='disk'>
	I0408 11:21:14.341272  376679 main.go:141] libmachine: (addons-825010)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:21:14.341281  376679 main.go:141] libmachine: (addons-825010)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/addons-825010.rawdisk'/>
	I0408 11:21:14.341289  376679 main.go:141] libmachine: (addons-825010)       <target dev='hda' bus='virtio'/>
	I0408 11:21:14.341296  376679 main.go:141] libmachine: (addons-825010)     </disk>
	I0408 11:21:14.341303  376679 main.go:141] libmachine: (addons-825010)     <interface type='network'>
	I0408 11:21:14.341313  376679 main.go:141] libmachine: (addons-825010)       <source network='mk-addons-825010'/>
	I0408 11:21:14.341318  376679 main.go:141] libmachine: (addons-825010)       <model type='virtio'/>
	I0408 11:21:14.341324  376679 main.go:141] libmachine: (addons-825010)     </interface>
	I0408 11:21:14.341329  376679 main.go:141] libmachine: (addons-825010)     <interface type='network'>
	I0408 11:21:14.341340  376679 main.go:141] libmachine: (addons-825010)       <source network='default'/>
	I0408 11:21:14.341360  376679 main.go:141] libmachine: (addons-825010)       <model type='virtio'/>
	I0408 11:21:14.341366  376679 main.go:141] libmachine: (addons-825010)     </interface>
	I0408 11:21:14.341374  376679 main.go:141] libmachine: (addons-825010)     <serial type='pty'>
	I0408 11:21:14.341380  376679 main.go:141] libmachine: (addons-825010)       <target port='0'/>
	I0408 11:21:14.341387  376679 main.go:141] libmachine: (addons-825010)     </serial>
	I0408 11:21:14.341393  376679 main.go:141] libmachine: (addons-825010)     <console type='pty'>
	I0408 11:21:14.341402  376679 main.go:141] libmachine: (addons-825010)       <target type='serial' port='0'/>
	I0408 11:21:14.341409  376679 main.go:141] libmachine: (addons-825010)     </console>
	I0408 11:21:14.341414  376679 main.go:141] libmachine: (addons-825010)     <rng model='virtio'>
	I0408 11:21:14.341421  376679 main.go:141] libmachine: (addons-825010)       <backend model='random'>/dev/random</backend>
	I0408 11:21:14.341430  376679 main.go:141] libmachine: (addons-825010)     </rng>
	I0408 11:21:14.341451  376679 main.go:141] libmachine: (addons-825010)     
	I0408 11:21:14.341470  376679 main.go:141] libmachine: (addons-825010)     
	I0408 11:21:14.341485  376679 main.go:141] libmachine: (addons-825010)   </devices>
	I0408 11:21:14.341501  376679 main.go:141] libmachine: (addons-825010) </domain>
	I0408 11:21:14.341515  376679 main.go:141] libmachine: (addons-825010) 
	I0408 11:21:14.347721  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:45:04:ce in network default
	I0408 11:21:14.348290  376679 main.go:141] libmachine: (addons-825010) Ensuring networks are active...
	I0408 11:21:14.348314  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:14.348994  376679 main.go:141] libmachine: (addons-825010) Ensuring network default is active
	I0408 11:21:14.349376  376679 main.go:141] libmachine: (addons-825010) Ensuring network mk-addons-825010 is active
	I0408 11:21:14.349959  376679 main.go:141] libmachine: (addons-825010) Getting domain xml...
	I0408 11:21:14.350676  376679 main.go:141] libmachine: (addons-825010) Creating domain...
	I0408 11:21:15.743922  376679 main.go:141] libmachine: (addons-825010) Waiting to get IP...
	I0408 11:21:15.744627  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:15.745009  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:15.745072  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:15.745007  376701 retry.go:31] will retry after 202.525244ms: waiting for machine to come up
	I0408 11:21:15.949662  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:15.950014  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:15.950044  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:15.949964  376701 retry.go:31] will retry after 272.678085ms: waiting for machine to come up
	I0408 11:21:16.224434  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:16.224890  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:16.224919  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:16.224857  376701 retry.go:31] will retry after 360.484519ms: waiting for machine to come up
	I0408 11:21:16.587733  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:16.588351  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:16.588390  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:16.588290  376701 retry.go:31] will retry after 565.797348ms: waiting for machine to come up
	I0408 11:21:17.156216  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:17.156721  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:17.156745  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:17.156655  376701 retry.go:31] will retry after 542.138216ms: waiting for machine to come up
	I0408 11:21:17.700339  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:17.700859  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:17.700900  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:17.700800  376701 retry.go:31] will retry after 605.902204ms: waiting for machine to come up
	I0408 11:21:18.308752  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:18.309218  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:18.309246  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:18.309161  376701 retry.go:31] will retry after 786.806662ms: waiting for machine to come up
	I0408 11:21:19.097713  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:19.098117  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:19.098151  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:19.098054  376701 retry.go:31] will retry after 1.105434838s: waiting for machine to come up
	I0408 11:21:20.205520  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:20.205885  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:20.205909  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:20.205843  376701 retry.go:31] will retry after 1.402118742s: waiting for machine to come up
	I0408 11:21:21.609523  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:21.609944  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:21.609977  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:21.609896  376701 retry.go:31] will retry after 1.476247353s: waiting for machine to come up
	I0408 11:21:23.088666  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:23.089210  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:23.089250  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:23.089147  376701 retry.go:31] will retry after 2.122148178s: waiting for machine to come up
	I0408 11:21:25.212791  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:25.213189  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:25.213208  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:25.213159  376701 retry.go:31] will retry after 3.090381233s: waiting for machine to come up
	I0408 11:21:28.304712  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:28.305056  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:28.305087  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:28.305010  376701 retry.go:31] will retry after 4.147827376s: waiting for machine to come up
	I0408 11:21:32.458579  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:32.458990  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find current IP address of domain addons-825010 in network mk-addons-825010
	I0408 11:21:32.459017  376679 main.go:141] libmachine: (addons-825010) DBG | I0408 11:21:32.458956  376701 retry.go:31] will retry after 5.424531482s: waiting for machine to come up
	I0408 11:21:37.888800  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:37.889270  376679 main.go:141] libmachine: (addons-825010) Found IP for machine: 192.168.39.221
	I0408 11:21:37.889291  376679 main.go:141] libmachine: (addons-825010) Reserving static IP address...
	I0408 11:21:37.889305  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has current primary IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:37.889701  376679 main.go:141] libmachine: (addons-825010) DBG | unable to find host DHCP lease matching {name: "addons-825010", mac: "52:54:00:a9:c8:e0", ip: "192.168.39.221"} in network mk-addons-825010
	I0408 11:21:37.965211  376679 main.go:141] libmachine: (addons-825010) DBG | Getting to WaitForSSH function...
	I0408 11:21:37.965255  376679 main.go:141] libmachine: (addons-825010) Reserved static IP address: 192.168.39.221
	I0408 11:21:37.965292  376679 main.go:141] libmachine: (addons-825010) Waiting for SSH to be available...
	I0408 11:21:37.967900  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:37.968271  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:37.968306  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:37.968478  376679 main.go:141] libmachine: (addons-825010) DBG | Using SSH client type: external
	I0408 11:21:37.968501  376679 main.go:141] libmachine: (addons-825010) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa (-rw-------)
	I0408 11:21:37.968526  376679 main.go:141] libmachine: (addons-825010) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:21:37.968549  376679 main.go:141] libmachine: (addons-825010) DBG | About to run SSH command:
	I0408 11:21:37.968558  376679 main.go:141] libmachine: (addons-825010) DBG | exit 0
	I0408 11:21:38.099766  376679 main.go:141] libmachine: (addons-825010) DBG | SSH cmd err, output: <nil>: 
	I0408 11:21:38.100058  376679 main.go:141] libmachine: (addons-825010) KVM machine creation complete!
	I0408 11:21:38.100453  376679 main.go:141] libmachine: (addons-825010) Calling .GetConfigRaw
	I0408 11:21:38.101129  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:38.101325  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:38.101545  376679 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:21:38.101565  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:21:38.102823  376679 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:21:38.102840  376679 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:21:38.102848  376679 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:21:38.102857  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:38.105293  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.105832  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.105889  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.106078  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:38.106256  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.106425  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.106558  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:38.106739  376679 main.go:141] libmachine: Using SSH client type: native
	I0408 11:21:38.106948  376679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0408 11:21:38.106959  376679 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:21:38.219214  376679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
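The lines above show the driver confirming SSH reachability by running "exit 0" on the new guest with host-key checking disabled. A minimal sketch of that readiness probe, assuming a plain ssh binary on PATH; the function name, key path and retry policy are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs "exit 0" on the remote host until it succeeds
// or the attempts are exhausted. The flags mirror the ones visible in the log.
func waitForSSH(user, host, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh never became ready: %w", err)
}

func main() {
	// Hypothetical values matching the machine in this log.
	if err := waitForSSH("docker", "192.168.39.221", "/path/to/id_rsa", 10); err != nil {
		fmt.Println(err)
	}
}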
	I0408 11:21:38.219238  376679 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:21:38.219247  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:38.222063  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.222374  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.222410  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.222612  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:38.222841  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.223027  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.223154  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:38.223313  376679 main.go:141] libmachine: Using SSH client type: native
	I0408 11:21:38.223518  376679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0408 11:21:38.223530  376679 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:21:38.336588  376679 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:21:38.336682  376679 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:21:38.336696  376679 main.go:141] libmachine: Provisioning with buildroot...
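Provisioner detection works by reading /etc/os-release over SSH and matching the ID/NAME fields, which is how the Buildroot guest is recognized above. A small sketch of that parse step, assuming the file contents have already been fetched; the helper name is made up for illustration:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns the key=value lines of /etc/os-release into a map,
// stripping the surrounding quotes that PRETTY_NAME carries.
func parseOSRelease(contents string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("compatible host:", info["PRETTY_NAME"])
	}
}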
	I0408 11:21:38.336710  376679 main.go:141] libmachine: (addons-825010) Calling .GetMachineName
	I0408 11:21:38.336992  376679 buildroot.go:166] provisioning hostname "addons-825010"
	I0408 11:21:38.337015  376679 main.go:141] libmachine: (addons-825010) Calling .GetMachineName
	I0408 11:21:38.337219  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:38.339839  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.340141  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.340170  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.340278  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:38.340501  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.340659  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.340823  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:38.340985  376679 main.go:141] libmachine: Using SSH client type: native
	I0408 11:21:38.341199  376679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0408 11:21:38.341218  376679 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-825010 && echo "addons-825010" | sudo tee /etc/hostname
	I0408 11:21:38.466966  376679 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-825010
	
	I0408 11:21:38.467003  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:38.469877  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.470390  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.470441  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.470590  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:38.470828  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.471033  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.471165  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:38.471343  376679 main.go:141] libmachine: Using SSH client type: native
	I0408 11:21:38.471552  376679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0408 11:21:38.471571  376679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-825010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-825010/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-825010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:21:38.593820  376679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
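Hostname provisioning is a two-part shell snippet: set the hostname and write /etc/hostname, then make sure a 127.0.1.1 entry in /etc/hosts points at the new name. A sketch that renders an equivalent script from a hostname; the exact quoting is illustrative rather than a copy of minikube's template:

package main

import "fmt"

// hostnameScript builds the remote shell used to apply a hostname and keep
// /etc/hosts consistent, mirroring the commands shown in the log above.
func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("addons-825010"))
}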
	I0408 11:21:38.593861  376679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:21:38.593887  376679 buildroot.go:174] setting up certificates
	I0408 11:21:38.593901  376679 provision.go:84] configureAuth start
	I0408 11:21:38.593912  376679 main.go:141] libmachine: (addons-825010) Calling .GetMachineName
	I0408 11:21:38.594234  376679 main.go:141] libmachine: (addons-825010) Calling .GetIP
	I0408 11:21:38.597074  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.597612  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.597644  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.597789  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:38.600065  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.600350  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.600396  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.600491  376679 provision.go:143] copyHostCerts
	I0408 11:21:38.600575  376679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:21:38.600726  376679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:21:38.600802  376679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:21:38.600909  376679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.addons-825010 san=[127.0.0.1 192.168.39.221 addons-825010 localhost minikube]
	I0408 11:21:38.845654  376679 provision.go:177] copyRemoteCerts
	I0408 11:21:38.845717  376679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:21:38.845744  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:38.848962  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.849273  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:38.849319  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:38.849503  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:38.849705  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:38.849889  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:38.850048  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:21:38.939228  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:21:38.964124  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 11:21:38.989454  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:21:39.014728  376679 provision.go:87] duration metric: took 420.808355ms to configureAuth
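configureAuth generates a server certificate signed by the local CA with the SAN set listed above (127.0.0.1, the machine IP, the machine name, localhost, minikube). A compact sketch of that signing step with crypto/x509; the file names and the PKCS#1 key format are assumptions, and the sketch is not minikube's generator:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Assumed input files; the real ones live under .minikube/certs.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)

	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-825010"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries seen in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.221")},
		DNSNames:    []string{"addons-825010", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}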
	I0408 11:21:39.014764  376679 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:21:39.014986  376679 config.go:182] Loaded profile config "addons-825010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:21:39.015112  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:39.017961  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.018275  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.018306  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.018471  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:39.018685  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.018859  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.018992  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:39.019186  376679 main.go:141] libmachine: Using SSH client type: native
	I0408 11:21:39.019398  376679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0408 11:21:39.019415  376679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:21:39.300307  376679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:21:39.300340  376679 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:21:39.300352  376679 main.go:141] libmachine: (addons-825010) Calling .GetURL
	I0408 11:21:39.301777  376679 main.go:141] libmachine: (addons-825010) DBG | Using libvirt version 6000000
	I0408 11:21:39.303991  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.304318  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.304351  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.304559  376679 main.go:141] libmachine: Docker is up and running!
	I0408 11:21:39.304579  376679 main.go:141] libmachine: Reticulating splines...
	I0408 11:21:39.304588  376679 client.go:171] duration metric: took 25.72480915s to LocalClient.Create
	I0408 11:21:39.304616  376679 start.go:167] duration metric: took 25.724881836s to libmachine.API.Create "addons-825010"
	I0408 11:21:39.304629  376679 start.go:293] postStartSetup for "addons-825010" (driver="kvm2")
	I0408 11:21:39.304643  376679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:21:39.304665  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:39.304921  376679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:21:39.304950  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:39.306848  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.307168  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.307193  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.307389  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:39.307586  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.307817  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:39.307974  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:21:39.395124  376679 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:21:39.399987  376679 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:21:39.400018  376679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:21:39.400096  376679 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:21:39.400130  376679 start.go:296] duration metric: took 95.493084ms for postStartSetup
	I0408 11:21:39.400174  376679 main.go:141] libmachine: (addons-825010) Calling .GetConfigRaw
	I0408 11:21:39.400887  376679 main.go:141] libmachine: (addons-825010) Calling .GetIP
	I0408 11:21:39.403497  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.403853  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.403886  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.404126  376679 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/config.json ...
	I0408 11:21:39.404355  376679 start.go:128] duration metric: took 25.84361593s to createHost
	I0408 11:21:39.404381  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:39.406598  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.406920  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.406952  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.407053  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:39.407293  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.407472  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.407637  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:39.407827  376679 main.go:141] libmachine: Using SSH client type: native
	I0408 11:21:39.408026  376679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0408 11:21:39.408040  376679 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:21:39.520757  376679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712575299.505228420
	
	I0408 11:21:39.520785  376679 fix.go:216] guest clock: 1712575299.505228420
	I0408 11:21:39.520811  376679 fix.go:229] Guest: 2024-04-08 11:21:39.50522842 +0000 UTC Remote: 2024-04-08 11:21:39.404367331 +0000 UTC m=+25.964636668 (delta=100.861089ms)
	I0408 11:21:39.520842  376679 fix.go:200] guest clock delta is within tolerance: 100.861089ms
	I0408 11:21:39.520849  376679 start.go:83] releasing machines lock for "addons-825010", held for 25.960199248s
	I0408 11:21:39.520876  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:39.521230  376679 main.go:141] libmachine: (addons-825010) Calling .GetIP
	I0408 11:21:39.524115  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.524490  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.524513  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.524647  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:39.525212  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:39.525392  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:21:39.525456  376679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:21:39.525515  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:39.525667  376679 ssh_runner.go:195] Run: cat /version.json
	I0408 11:21:39.525694  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:21:39.527944  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.528128  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.528447  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.528483  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.528577  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:39.528596  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:39.528639  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:39.528763  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:21:39.528764  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.529016  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:21:39.529019  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:39.529204  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:21:39.529383  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:21:39.529385  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:21:39.645817  376679 ssh_runner.go:195] Run: systemctl --version
	I0408 11:21:39.652166  376679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:21:39.812043  376679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:21:39.818735  376679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:21:39.818874  376679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:21:39.835833  376679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:21:39.835862  376679 start.go:494] detecting cgroup driver to use...
	I0408 11:21:39.835929  376679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:21:39.852268  376679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:21:39.866309  376679 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:21:39.866383  376679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:21:39.880002  376679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:21:39.893561  376679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:21:40.008192  376679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:21:40.173090  376679 docker.go:233] disabling docker service ...
	I0408 11:21:40.173191  376679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:21:40.189804  376679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:21:40.203530  376679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:21:40.322409  376679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:21:40.445925  376679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
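Because this profile uses CRI-O, the start path stops, disables and masks the cri-docker and docker units before configuring the runtime, as the systemctl calls above show. A small sketch of that sequence over a generic runner; runRemote stands in for the SSH-backed runner the log calls ssh_runner, and the exact disable/mask split per unit is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// runRemote is a stand-in for an SSH-backed command runner; here it simply
// executes locally so the sketch stays self-contained.
func runRemote(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v (%s)", name, args, err, out)
	}
	return nil
}

// disableUnit stops a unit (ignoring failures, since it may not be active),
// then disables and masks it so it cannot come back on reboot.
func disableUnit(unit string) {
	_ = runRemote("sudo", "systemctl", "stop", "-f", unit)
	_ = runRemote("sudo", "systemctl", "disable", unit)
	_ = runRemote("sudo", "systemctl", "mask", unit)
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		disableUnit(u)
	}
}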
	I0408 11:21:40.460582  376679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:21:40.481205  376679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:21:40.481302  376679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.493438  376679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:21:40.493532  376679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.505836  376679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.518011  376679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.530127  376679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:21:40.542072  376679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.553943  376679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.573098  376679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:21:40.585273  376679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:21:40.596216  376679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:21:40.596292  376679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:21:40.611833  376679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:21:40.622788  376679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:21:40.756258  376679 ssh_runner.go:195] Run: sudo systemctl restart crio
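The runtime configuration step above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup, and open unprivileged ports through default_sysctls, followed by a daemon-reload and crio restart. A sketch of equivalent rewrites done on the file contents in memory rather than with sed; the helper is illustrative and does not reproduce minikube's exact sed expressions:

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces an existing `key = ...` line or appends one if missing.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` *=.*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.9"`)
	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
	conf = setKey(conf, "conmon_cgroup", `"pod"`)
	conf = setKey(conf, "default_sysctls", `[ "net.ipv4.ip_unprivileged_port_start=0" ]`)
	fmt.Print(conf)
}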
	I0408 11:21:40.901649  376679 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:21:40.901759  376679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:21:40.906671  376679 start.go:562] Will wait 60s for crictl version
	I0408 11:21:40.906763  376679 ssh_runner.go:195] Run: which crictl
	I0408 11:21:40.910850  376679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:21:40.956629  376679 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:21:40.956739  376679 ssh_runner.go:195] Run: crio --version
	I0408 11:21:40.986069  376679 ssh_runner.go:195] Run: crio --version
	I0408 11:21:41.016477  376679 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:21:41.017879  376679 main.go:141] libmachine: (addons-825010) Calling .GetIP
	I0408 11:21:41.020453  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:41.020777  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:21:41.020810  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:21:41.021053  376679 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:21:41.025885  376679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:21:41.040132  376679 kubeadm.go:877] updating cluster {Name:addons-825010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-825010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 11:21:41.040287  376679 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:21:41.040334  376679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:21:41.077155  376679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 11:21:41.077242  376679 ssh_runner.go:195] Run: which lz4
	I0408 11:21:41.081882  376679 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 11:21:41.086369  376679 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 11:21:41.086413  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 11:21:42.572853  376679 crio.go:462] duration metric: took 1.490999021s to copy over tarball
	I0408 11:21:42.572947  376679 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 11:21:44.999720  376679 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.426716248s)
	I0408 11:21:44.999757  376679 crio.go:469] duration metric: took 2.426862523s to extract the tarball
	I0408 11:21:44.999765  376679 ssh_runner.go:146] rm: /preloaded.tar.lz4
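The preload path first stats /preloaded.tar.lz4 on the guest, copies the cached tarball only when it is missing, unpacks it with tar -I lz4 under /var, then removes it, which is the sequence logged above. A local sketch of the check-then-extract flow; the paths are placeholders and the scp step is omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into dir if it exists,
// mirroring the stat -> tar -I lz4 -> rm sequence in the log.
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("no preload tarball, images must be pulled: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v (%s)", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	// Placeholder paths; on the guest these are /preloaded.tar.lz4 and /var.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}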
	I0408 11:21:45.038438  376679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:21:45.082355  376679 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:21:45.082389  376679 cache_images.go:84] Images are preloaded, skipping loading
	I0408 11:21:45.082400  376679 kubeadm.go:928] updating node { 192.168.39.221 8443 v1.29.3 crio true true} ...
	I0408 11:21:45.082525  376679 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-825010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-825010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
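The kubelet drop-in shown above is generated per node: the binary path is versioned and the --hostname-override/--node-ip flags come from the machine config. A text/template sketch that produces an equivalent drop-in; the struct field names are invented for the example and the output goes to stdout instead of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.29.3", "addons-825010", "192.168.39.221"}
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}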
	I0408 11:21:45.082600  376679 ssh_runner.go:195] Run: crio config
	I0408 11:21:45.130540  376679 cni.go:84] Creating CNI manager for ""
	I0408 11:21:45.130567  376679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 11:21:45.130580  376679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 11:21:45.130603  376679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-825010 NodeName:addons-825010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 11:21:45.130753  376679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-825010"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
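The kubeadm.yaml above is likewise rendered from the cluster config: the InitConfiguration carries the advertise address, node name and CRI socket, while the ClusterConfiguration pins the control-plane endpoint and the pod/service subnets. A trimmed sketch that renders just the InitConfiguration fragment of that document with text/template; it is not the full generator:

package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeIP, NodeName string
		APIServerPort    int
	}{"192.168.39.221", "addons-825010", 8443}
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}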
	
	I0408 11:21:45.130851  376679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:21:45.141331  376679 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 11:21:45.141415  376679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 11:21:45.151145  376679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0408 11:21:45.170173  376679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:21:45.188448  376679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0408 11:21:45.206831  376679 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I0408 11:21:45.210907  376679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:21:45.223935  376679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:21:45.351732  376679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:21:45.370574  376679 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010 for IP: 192.168.39.221
	I0408 11:21:45.370608  376679 certs.go:194] generating shared ca certs ...
	I0408 11:21:45.370634  376679 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.370826  376679 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:21:45.492832  376679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt ...
	I0408 11:21:45.492869  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt: {Name:mk3d2561361caf44bbe17f1d5526a9e2c28d46b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.493047  376679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key ...
	I0408 11:21:45.493070  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key: {Name:mk6b52dea25a829733d3fbb9f9d912018e367129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.493144  376679 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:21:45.617463  376679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt ...
	I0408 11:21:45.617496  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt: {Name:mka6b4b133aad620a049945bfd0bc14a8e481ab5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.617663  376679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key ...
	I0408 11:21:45.617675  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key: {Name:mkc6ad332a9d735fb65cd15d54e5efe0c92c9d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.617743  376679 certs.go:256] generating profile certs ...
	I0408 11:21:45.617806  376679 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.key
	I0408 11:21:45.617821  376679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt with IP's: []
	I0408 11:21:45.813592  376679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt ...
	I0408 11:21:45.813631  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: {Name:mk2e3e40c0d47199c8ef954eabb88d51beacba5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.813814  376679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.key ...
	I0408 11:21:45.813830  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.key: {Name:mk0ed45af2da7794bb2f124f759e6b91952d520b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.813905  376679 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.key.45e34f5f
	I0408 11:21:45.813925  376679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.crt.45e34f5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.221]
	I0408 11:21:45.910293  376679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.crt.45e34f5f ...
	I0408 11:21:45.910332  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.crt.45e34f5f: {Name:mk1c35432c199af68c2223c5ff1ff5d8d6971e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.910500  376679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.key.45e34f5f ...
	I0408 11:21:45.910515  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.key.45e34f5f: {Name:mk05e4c51848e663c630a197fa6f41240edfe085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:45.910583  376679 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.crt.45e34f5f -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.crt
	I0408 11:21:45.910655  376679 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.key.45e34f5f -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.key
	I0408 11:21:45.910699  376679 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.key
	I0408 11:21:45.910716  376679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.crt with IP's: []
	I0408 11:21:46.128236  376679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.crt ...
	I0408 11:21:46.128272  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.crt: {Name:mk77b7b7963ba31a4d4f3b6540116fd0e196cc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:46.128448  376679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.key ...
	I0408 11:21:46.128463  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.key: {Name:mkf83db3f008a91bef4d89e0cf9268908e9d85a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:21:46.128632  376679 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:21:46.128674  376679 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:21:46.128700  376679 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:21:46.128722  376679 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:21:46.129354  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:21:46.177605  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:21:46.219596  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:21:46.252390  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:21:46.278150  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0408 11:21:46.304354  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 11:21:46.330416  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:21:46.357567  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:21:46.384834  376679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:21:46.411991  376679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 11:21:46.430445  376679 ssh_runner.go:195] Run: openssl version
	I0408 11:21:46.436755  376679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:21:46.448492  376679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:21:46.453142  376679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:21:46.453224  376679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:21:46.459210  376679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
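Trusting minikubeCA.pem on the guest follows the OpenSSL convention: link the PEM into /usr/share/ca-certificates, compute its subject hash, and create the <hash>.0 symlink under /etc/ssl/certs, which is what the openssl and ln commands above do. A sketch of the hash-and-link step that shells out to the same openssl invocation; paths match the log, the helper itself is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA symlinks certPath into /etc/ssl/certs under its OpenSSL subject
// hash (e.g. b5213941.0), matching the commands in the log above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}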
	I0408 11:21:46.470580  376679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:21:46.475241  376679 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:21:46.475301  376679 kubeadm.go:391] StartCluster: {Name:addons-825010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-825010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:21:46.475378  376679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 11:21:46.475427  376679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:21:46.518100  376679 cri.go:89] found id: ""
	I0408 11:21:46.518177  376679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 11:21:46.528923  376679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 11:21:46.539235  376679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 11:21:46.548758  376679 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 11:21:46.548784  376679 kubeadm.go:156] found existing configuration files:
	
	I0408 11:21:46.548843  376679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 11:21:46.557858  376679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 11:21:46.557935  376679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 11:21:46.567542  376679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 11:21:46.577132  376679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 11:21:46.577208  376679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 11:21:46.587377  376679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 11:21:46.596728  376679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 11:21:46.596806  376679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 11:21:46.606524  376679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 11:21:46.616114  376679 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 11:21:46.616202  376679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
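	[Editor's note] The cleanup sequence above checks each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes the file when the endpoint is absent. A minimal Go sketch of that pattern follows, assuming the commands run locally rather than through minikube's ssh_runner; the paths and endpoint are the ones in the log, the loop itself is only illustrative.

```go
// Illustrative only: mirrors the stale-config cleanup shown in the log
// (grep each kubeconfig for the control-plane endpoint, remove it if missing).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent (or the file does not
		// exist), which is the "may not be in ... - will remove" case in the log.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, conf)
			if err := exec.Command("sudo", "rm", "-f", conf).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "rm %s: %v\n", conf, err)
			}
		}
	}
}
```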
	I0408 11:21:46.626294  376679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 11:21:46.678464  376679 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 11:21:46.678554  376679 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 11:21:46.810946  376679 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 11:21:46.811122  376679 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 11:21:46.811234  376679 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 11:21:47.076815  376679 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 11:21:47.193386  376679 out.go:204]   - Generating certificates and keys ...
	I0408 11:21:47.193528  376679 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 11:21:47.193652  376679 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 11:21:47.274768  376679 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 11:21:47.353434  376679 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 11:21:47.636868  376679 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 11:21:47.862779  376679 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 11:21:48.148917  376679 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 11:21:48.149058  376679 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-825010 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I0408 11:21:48.520234  376679 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 11:21:48.597085  376679 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-825010 localhost] and IPs [192.168.39.221 127.0.0.1 ::1]
	I0408 11:21:48.791896  376679 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 11:21:48.974969  376679 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 11:21:49.069034  376679 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 11:21:49.069142  376679 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 11:21:49.213103  376679 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 11:21:49.288132  376679 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 11:21:49.458437  376679 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 11:21:49.646182  376679 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 11:21:49.819531  376679 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 11:21:49.819942  376679 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 11:21:49.822436  376679 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 11:21:49.824465  376679 out.go:204]   - Booting up control plane ...
	I0408 11:21:49.824586  376679 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 11:21:49.824660  376679 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 11:21:49.824719  376679 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 11:21:49.844266  376679 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 11:21:49.846698  376679 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 11:21:49.846771  376679 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 11:21:49.978100  376679 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 11:21:55.980888  376679 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002209 seconds
	I0408 11:21:55.997857  376679 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 11:21:56.014334  376679 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 11:21:56.557423  376679 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 11:21:56.557610  376679 kubeadm.go:309] [mark-control-plane] Marking the node addons-825010 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 11:21:57.079534  376679 kubeadm.go:309] [bootstrap-token] Using token: u6d9v4.6flp7oj4s1ti43zl
	I0408 11:21:57.081360  376679 out.go:204]   - Configuring RBAC rules ...
	I0408 11:21:57.081533  376679 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 11:21:57.088777  376679 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 11:21:57.116944  376679 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 11:21:57.138467  376679 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 11:21:57.147290  376679 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 11:21:57.150855  376679 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 11:21:57.179643  376679 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 11:21:57.421202  376679 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 11:21:57.495502  376679 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 11:21:57.495531  376679 kubeadm.go:309] 
	I0408 11:21:57.495589  376679 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 11:21:57.495593  376679 kubeadm.go:309] 
	I0408 11:21:57.495673  376679 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 11:21:57.495707  376679 kubeadm.go:309] 
	I0408 11:21:57.495776  376679 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 11:21:57.495923  376679 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 11:21:57.496022  376679 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 11:21:57.496044  376679 kubeadm.go:309] 
	I0408 11:21:57.496149  376679 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 11:21:57.496173  376679 kubeadm.go:309] 
	I0408 11:21:57.496237  376679 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 11:21:57.496258  376679 kubeadm.go:309] 
	I0408 11:21:57.496337  376679 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 11:21:57.496440  376679 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 11:21:57.496527  376679 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 11:21:57.496537  376679 kubeadm.go:309] 
	I0408 11:21:57.496669  376679 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 11:21:57.496770  376679 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 11:21:57.496782  376679 kubeadm.go:309] 
	I0408 11:21:57.496911  376679 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token u6d9v4.6flp7oj4s1ti43zl \
	I0408 11:21:57.497050  376679 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 11:21:57.497085  376679 kubeadm.go:309] 	--control-plane 
	I0408 11:21:57.497094  376679 kubeadm.go:309] 
	I0408 11:21:57.497204  376679 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 11:21:57.497227  376679 kubeadm.go:309] 
	I0408 11:21:57.497359  376679 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token u6d9v4.6flp7oj4s1ti43zl \
	I0408 11:21:57.497517  376679 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 11:21:57.498456  376679 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
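	[Editor's note] The kubeadm init output above ends with the worker and control-plane join commands. The sketch below pulls the bootstrap token and CA cert hash out of that output with a regexp; the extraction helper is an assumption for illustration, not minikube code, and the sample values are copied from the log.

```go
// Illustrative only: extract the join token and discovery hash from
// kubeadm init output like the block logged above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	output := `kubeadm join control-plane.minikube.internal:8443 --token u6d9v4.6flp7oj4s1ti43zl \
	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 `

	token := regexp.MustCompile(`--token (\S+)`).FindStringSubmatch(output)
	hash := regexp.MustCompile(`--discovery-token-ca-cert-hash (\S+)`).FindStringSubmatch(output)
	if token != nil && hash != nil {
		fmt.Println("token:", token[1])
		fmt.Println("ca cert hash:", hash[1])
	}
}
```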
	I0408 11:21:57.498494  376679 cni.go:84] Creating CNI manager for ""
	I0408 11:21:57.498505  376679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 11:21:57.501574  376679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 11:21:57.503076  376679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 11:21:57.546706  376679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 11:21:57.578124  376679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 11:21:57.578217  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-825010 minikube.k8s.io/updated_at=2024_04_08T11_21_57_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=addons-825010 minikube.k8s.io/primary=true
	I0408 11:21:57.578244  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:21:57.642274  376679 ops.go:34] apiserver oom_adj: -16
	I0408 11:21:57.800995  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:21:58.301810  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:21:58.801098  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:21:59.302007  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:21:59.801424  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:00.301095  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:00.801541  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:01.301607  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:01.801673  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:02.301018  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:02.801578  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:03.301274  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:03.801202  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:04.301133  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:04.801838  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:05.302064  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:05.801581  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:06.301145  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:06.801017  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:07.301050  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:07.801191  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:08.301409  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:08.801738  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:09.302045  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:09.801618  376679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:22:09.896381  376679 kubeadm.go:1107] duration metric: took 12.318239138s to wait for elevateKubeSystemPrivileges
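	[Editor's note] The repeated `kubectl get sa default` runs above are a wait loop: keep polling until the default service account exists, then record the elapsed time. A minimal sketch under those assumptions (500ms interval inferred from the log timestamps; kubectl and kubeconfig paths taken from the log):

```go
// Minimal sketch of the service-account wait loop visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		kubectl    = "/var/lib/minikube/binaries/v1.29.3/kubectl"
		kubeconfig = "/var/lib/minikube/kubeconfig"
	)
	start := time.Now()
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			break // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("waited %s for the default service account\n", time.Since(start))
}
```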
	W0408 11:22:09.896426  376679 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 11:22:09.896435  376679 kubeadm.go:393] duration metric: took 23.421141229s to StartCluster
	I0408 11:22:09.896495  376679 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:22:09.896661  376679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:22:09.897112  376679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:22:09.897512  376679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 11:22:09.897546  376679 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:22:09.899166  376679 out.go:177] * Verifying Kubernetes components...
	I0408 11:22:09.897639  376679 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0408 11:22:09.897765  376679 config.go:182] Loaded profile config "addons-825010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
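	[Editor's note] The toEnable map logged above drives the burst of "Setting addon X=true" lines that follow. The sketch below is a hypothetical illustration of filtering such a map down to the enabled addons; the keys and profile name come from the log, the filtering code is not minikube's addons.go.

```go
// Hypothetical sketch: list the addons enabled in a toEnable-style map.
package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"cloud-spanner": true, "csi-hostpath-driver": true,
		"default-storageclass": true, "gcp-auth": true, "helm-tiller": true,
		"ingress": true, "ingress-dns": true, "inspektor-gadget": true,
		"metrics-server": true, "nvidia-device-plugin": true, "registry": true,
		"storage-provisioner": true, "storage-provisioner-rancher": true,
		"volumesnapshots": true, "yakd": true,
		"ambassador": false, "dashboard": false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	for _, name := range enabled {
		fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-825010")
	}
}
```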
	I0408 11:22:09.900887  376679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:22:09.900905  376679 addons.go:69] Setting yakd=true in profile "addons-825010"
	I0408 11:22:09.900920  376679 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-825010"
	I0408 11:22:09.900951  376679 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-825010"
	I0408 11:22:09.900945  376679 addons.go:69] Setting cloud-spanner=true in profile "addons-825010"
	I0408 11:22:09.900959  376679 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-825010"
	I0408 11:22:09.901036  376679 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-825010"
	I0408 11:22:09.900953  376679 addons.go:234] Setting addon yakd=true in "addons-825010"
	I0408 11:22:09.901169  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.900955  376679 addons.go:69] Setting storage-provisioner=true in profile "addons-825010"
	I0408 11:22:09.901247  376679 addons.go:234] Setting addon storage-provisioner=true in "addons-825010"
	I0408 11:22:09.900921  376679 addons.go:69] Setting default-storageclass=true in profile "addons-825010"
	I0408 11:22:09.901352  376679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-825010"
	I0408 11:22:09.900972  376679 addons.go:69] Setting registry=true in profile "addons-825010"
	I0408 11:22:09.901412  376679 addons.go:234] Setting addon registry=true in "addons-825010"
	I0408 11:22:09.901442  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.900979  376679 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-825010"
	I0408 11:22:09.901487  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.900906  376679 addons.go:69] Setting gcp-auth=true in profile "addons-825010"
	I0408 11:22:09.901564  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.901589  376679 mustload.go:65] Loading cluster: addons-825010
	I0408 11:22:09.901597  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.900979  376679 addons.go:69] Setting volumesnapshots=true in profile "addons-825010"
	I0408 11:22:09.901691  376679 addons.go:234] Setting addon volumesnapshots=true in "addons-825010"
	I0408 11:22:09.901727  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.901794  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.901801  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.901804  376679 config.go:182] Loaded profile config "addons-825010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:22:09.901817  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.901818  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.900979  376679 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-825010"
	I0408 11:22:09.901869  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.901891  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.901892  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.900986  376679 addons.go:69] Setting helm-tiller=true in profile "addons-825010"
	I0408 11:22:09.902072  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.902126  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902155  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.900982  376679 addons.go:234] Setting addon cloud-spanner=true in "addons-825010"
	I0408 11:22:09.902200  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902206  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.902074  376679 addons.go:234] Setting addon helm-tiller=true in "addons-825010"
	I0408 11:22:09.901005  376679 addons.go:69] Setting metrics-server=true in profile "addons-825010"
	I0408 11:22:09.902270  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.902275  376679 addons.go:234] Setting addon metrics-server=true in "addons-825010"
	I0408 11:22:09.901307  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.900989  376679 addons.go:69] Setting inspektor-gadget=true in profile "addons-825010"
	I0408 11:22:09.902363  376679 addons.go:234] Setting addon inspektor-gadget=true in "addons-825010"
	I0408 11:22:09.902303  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902409  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.902494  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.902533  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.900991  376679 addons.go:69] Setting ingress-dns=true in profile "addons-825010"
	I0408 11:22:09.902396  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.902584  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902634  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902638  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.902658  376679 addons.go:234] Setting addon ingress-dns=true in "addons-825010"
	I0408 11:22:09.900987  376679 addons.go:69] Setting ingress=true in profile "addons-825010"
	I0408 11:22:09.902881  376679 addons.go:234] Setting addon ingress=true in "addons-825010"
	I0408 11:22:09.902726  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.902909  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902825  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.902956  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.902969  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.902979  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.906109  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.906198  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.906529  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.906571  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.906628  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.906666  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.925611  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0408 11:22:09.926315  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.927060  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.927090  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.927566  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.927811  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.927982  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0408 11:22:09.928683  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.929413  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.929433  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.931925  376679 addons.go:234] Setting addon default-storageclass=true in "addons-825010"
	I0408 11:22:09.931984  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.932415  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.932462  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.932695  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I0408 11:22:09.933262  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.933426  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.934026  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.934074  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.934388  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.934412  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.934880  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.935499  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.935546  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.936255  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.936293  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.946087  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0408 11:22:09.946709  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.947334  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.947359  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.947946  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.948583  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.948635  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.949446  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0408 11:22:09.949965  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.950402  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
	I0408 11:22:09.950711  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.950743  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.950817  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.951215  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.951385  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.951422  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.951816  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.951828  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.951871  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.954107  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0408 11:22:09.954260  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0408 11:22:09.954949  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.954984  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.955231  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.955799  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.955821  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.955821  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.956208  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.956440  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.956609  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.956624  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.957033  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.957968  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.958003  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.959121  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.959561  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.959611  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.966261  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0408 11:22:09.966951  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.967628  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.967649  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.968185  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.968864  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.968896  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.969206  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I0408 11:22:09.969955  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.970080  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0408 11:22:09.970533  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.970556  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.970858  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.970945  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.971152  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.971582  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.971610  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.972526  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.973242  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.973298  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.974928  376679 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-825010"
	I0408 11:22:09.974986  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:09.975382  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.975435  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.977002  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0408 11:22:09.977175  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0408 11:22:09.977475  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38639
	I0408 11:22:09.977839  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.978014  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.978553  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.978572  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.978876  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.978895  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.978948  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.979061  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0408 11:22:09.979612  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.979652  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.979846  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0408 11:22:09.980116  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.980271  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.980685  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.980709  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.981386  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.981585  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0408 11:22:09.981772  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.981830  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.981923  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37427
	I0408 11:22:09.982074  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.982858  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.982885  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.982941  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.982950  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.983431  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.983450  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.983581  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.983592  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.983991  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.984076  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:09.984153  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:09.984242  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.986603  376679 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0408 11:22:09.984876  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.985008  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:09.985424  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.985844  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:09.985896  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.987988  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.988028  376679 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 11:22:09.988047  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0408 11:22:09.988073  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:09.988163  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:09.988382  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:09.988568  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:09.990736  376679 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0408 11:22:09.989229  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:09.992152  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:09.992964  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:09.992990  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:09.993058  376679 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0408 11:22:09.993069  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0408 11:22:09.993088  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:09.993173  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:09.993264  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:09.992770  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:09.993457  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:09.997396  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0408 11:22:09.993957  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:09.996189  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:09.996906  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:09.998110  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:09.999046  376679 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0408 11:22:09.999069  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0408 11:22:09.999095  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:09.999190  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:09.999242  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.000127  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.002190  376679 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0408 11:22:10.000326  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.003245  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36323
	I0408 11:22:10.004074  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37651
	I0408 11:22:10.006297  376679 out.go:177]   - Using image docker.io/registry:2.8.3
	I0408 11:22:10.004204  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.004763  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.005079  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.005130  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.005415  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.006643  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35713
	I0408 11:22:10.008031  376679 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0408 11:22:10.008047  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0408 11:22:10.008071  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.008144  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.008177  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.008459  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.008517  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.008836  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.009936  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.009958  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.010044  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.010463  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.010962  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.011336  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.011359  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.011478  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.011501  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.011905  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.012038  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.012283  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.012821  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:10.012870  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:10.014925  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.014995  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.015017  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.015033  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.015279  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.015493  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.015671  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.016965  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:10.016994  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:10.017256  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0408 11:22:10.017650  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I0408 11:22:10.017808  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.018267  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.018346  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.018378  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.018785  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.018805  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.018824  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.019046  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.019739  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.020797  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:10.020966  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:10.021768  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.024043  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0408 11:22:10.025744  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0408 11:22:10.025435  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0408 11:22:10.027393  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0408 11:22:10.027548  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0408 11:22:10.028033  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.029050  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0408 11:22:10.029609  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.029900  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.031266  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.031342  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0408 11:22:10.033108  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.033537  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.034861  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0408 11:22:10.035003  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.035058  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.036689  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0408 11:22:10.038195  376679 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0408 11:22:10.040191  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0408 11:22:10.040213  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0408 11:22:10.037863  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.040236  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.038933  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46121
	I0408 11:22:10.038942  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.039710  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35205
	I0408 11:22:10.039776  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0408 11:22:10.042524  376679 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0408 11:22:10.041040  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.041132  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.041265  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.041568  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.042886  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
	I0408 11:22:10.043364  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.044086  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.044116  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.044196  376679 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 11:22:10.044218  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 11:22:10.044243  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.043790  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0408 11:22:10.043876  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.045408  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.045522  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.045537  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.045611  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.045626  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.045684  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.045703  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.045778  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.045908  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.045926  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.045987  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.046020  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.046221  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.046221  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.046291  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.046556  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.046565  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.046828  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.047019  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.047229  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.047255  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.047518  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.047662  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.047761  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.047954  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.048380  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.051151  376679 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0408 11:22:10.052665  376679 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 11:22:10.052691  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0408 11:22:10.052713  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.050349  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.050515  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.050599  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.052856  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.052888  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.051330  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.051362  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.051611  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.051901  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.054696  376679 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0408 11:22:10.053695  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.055960  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.056417  376679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 11:22:10.056445  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.056157  376679 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0408 11:22:10.056170  376679 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0408 11:22:10.056247  376679 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0408 11:22:10.056384  376679 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0408 11:22:10.056111  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I0408 11:22:10.056704  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.056707  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.059479  376679 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0408 11:22:10.059502  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0408 11:22:10.059533  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.058131  376679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 11:22:10.058149  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0408 11:22:10.058161  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.058222  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.058268  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.058609  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.060813  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0408 11:22:10.061725  376679 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0408 11:22:10.061746  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0408 11:22:10.063257  376679 out.go:177]   - Using image docker.io/busybox:stable
	I0408 11:22:10.061765  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.061788  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.062097  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.062448  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.062491  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:10.063505  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.064175  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.064832  376679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 11:22:10.065983  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0408 11:22:10.066024  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.068372  376679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0408 11:22:10.066221  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.066329  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.066650  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.067110  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:10.067281  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.068941  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.069559  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.070392  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.070421  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.070719  376679 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 11:22:10.070734  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0408 11:22:10.070752  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.070851  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.070900  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:10.071255  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.071280  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.069955  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.070150  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.071452  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.071528  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.071559  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.071572  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.071615  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.071654  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.071657  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.071665  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.072079  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.072085  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.072089  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.072093  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:10.072143  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.072263  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.072264  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.072381  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.072729  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:10.073325  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.073548  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.074210  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.074951  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.074973  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	W0408 11:22:10.075200  376679 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60608->192.168.39.221:22: read: connection reset by peer
	I0408 11:22:10.075227  376679 retry.go:31] will retry after 276.321036ms: ssh: handshake failed: read tcp 192.168.39.1:60608->192.168.39.221:22: read: connection reset by peer
	I0408 11:22:10.075277  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.075449  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.075507  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.077660  376679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 11:22:10.075857  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.076208  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:10.079173  376679 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:22:10.079194  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 11:22:10.079217  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.079258  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.079348  376679 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 11:22:10.079361  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 11:22:10.079377  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:10.083253  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.083314  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.083674  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.083715  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.083786  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:10.083805  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:10.083855  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.084034  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:10.084114  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.084289  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.084307  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:10.084462  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:10.084787  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:10.084944  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	W0408 11:22:10.085379  376679 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0408 11:22:10.085420  376679 retry.go:31] will retry after 232.396169ms: ssh: handshake failed: EOF
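
The two dial warnings above come from several addon installers opening SSH sessions to the node at the same time; a couple of the handshakes get reset ("connection reset by peer", "EOF") and sshutil simply redials after a short randomized delay, as the "will retry after ..." lines show. Purely as an illustration of that pattern (the helper name and delays below are assumptions, not minikube's actual retry.go code), a minimal retry-with-backoff sketch in Go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a growing, jittered delay between tries (compare the log's
    // "will retry after 276.321036ms" / "232.396169ms" messages).
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            time.Sleep(delay)
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

In this run both dials succeed on their second attempt roughly a quarter of a second later, so the warnings are benign.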
	I0408 11:22:10.525527  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0408 11:22:10.527396  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0408 11:22:10.552611  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 11:22:10.569634  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0408 11:22:10.575198  376679 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0408 11:22:10.575226  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0408 11:22:10.609305  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0408 11:22:10.615262  376679 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0408 11:22:10.615320  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0408 11:22:10.640648  376679 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0408 11:22:10.640680  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0408 11:22:10.652575  376679 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0408 11:22:10.652608  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0408 11:22:10.661975  376679 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0408 11:22:10.662020  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0408 11:22:10.672454  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0408 11:22:10.672482  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0408 11:22:10.722034  376679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 11:22:10.722063  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0408 11:22:10.747939  376679 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0408 11:22:10.747971  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0408 11:22:10.752049  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:22:10.757456  376679 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0408 11:22:10.757482  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0408 11:22:10.784485  376679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:22:10.784554  376679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 11:22:10.880024  376679 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0408 11:22:10.880063  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0408 11:22:10.923338  376679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 11:22:10.923365  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 11:22:10.925986  376679 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0408 11:22:10.926019  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0408 11:22:10.959855  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0408 11:22:10.959895  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0408 11:22:10.972201  376679 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0408 11:22:10.972246  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0408 11:22:11.032888  376679 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0408 11:22:11.032929  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0408 11:22:11.047531  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0408 11:22:11.054598  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0408 11:22:11.145042  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0408 11:22:11.151205  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0408 11:22:11.151234  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0408 11:22:11.182912  376679 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 11:22:11.182950  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 11:22:11.196810  376679 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0408 11:22:11.196839  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0408 11:22:11.212882  376679 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0408 11:22:11.212943  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0408 11:22:11.308894  376679 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0408 11:22:11.308927  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0408 11:22:11.501676  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 11:22:11.506356  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0408 11:22:11.506384  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0408 11:22:11.510735  376679 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0408 11:22:11.510761  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0408 11:22:11.521773  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0408 11:22:11.521804  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0408 11:22:11.649210  376679 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0408 11:22:11.649240  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0408 11:22:11.760425  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0408 11:22:11.810494  376679 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 11:22:11.810523  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0408 11:22:11.846367  376679 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0408 11:22:11.846397  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0408 11:22:11.992604  376679 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0408 11:22:11.992637  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0408 11:22:12.033798  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 11:22:12.106819  376679 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0408 11:22:12.106852  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0408 11:22:12.274715  376679 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 11:22:12.274746  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0408 11:22:12.440107  376679 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0408 11:22:12.440137  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0408 11:22:12.504085  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0408 11:22:12.739834  376679 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0408 11:22:12.739869  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0408 11:22:13.033164  376679 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0408 11:22:13.033195  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0408 11:22:13.309439  376679 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 11:22:13.309469  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0408 11:22:13.686657  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0408 11:22:15.639671  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.114101223s)
	I0408 11:22:15.639720  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.112288785s)
	I0408 11:22:15.639766  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.639780  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.639780  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.087131927s)
	I0408 11:22:15.639815  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.639830  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.639766  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.639857  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.640107  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.640127  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.640138  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.640147  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.640553  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:15.640559  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.640559  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:15.640572  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.640576  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:15.640603  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.640606  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.640610  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.640613  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.640620  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.640622  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.640628  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.640628  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.641199  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:15.641234  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.641241  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.642359  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:15.642403  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.642410  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.703352  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.703379  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.703858  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.703885  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.703912  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:15.706090  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:15.706109  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:15.706380  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:15.706398  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:15.706381  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	W0408 11:22:15.706531  376679 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
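
The 'default-storageclass' warning is the standard optimistic-concurrency failure: minikube read the local-path StorageClass, something else updated it in the meantime, and the write was rejected because the resourceVersion no longer matched. The usual remedy is to re-read the object and re-apply the change inside a conflict-retry loop. A minimal client-go sketch of that idea (the function name markDefault is made up for illustration; the annotation key is the standard default-class marker):

    package main

    import (
        "context"
        "strconv"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault flips the is-default-class annotation on a StorageClass,
    // automatically re-fetching and retrying if the update hits a conflict.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string, isDefault bool) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = strconv.FormatBool(isDefault)
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }

Because the addon is only toggling which class is default, losing this race once is harmless; the warning is surfaced and the run continues.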
	I0408 11:22:16.825454  376679 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0408 11:22:16.825526  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:16.829120  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:16.829612  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:16.829643  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:16.829874  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:16.830124  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:16.830315  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:16.830474  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:17.377273  376679 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0408 11:22:17.555150  376679 addons.go:234] Setting addon gcp-auth=true in "addons-825010"
	I0408 11:22:17.555218  376679 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:22:17.555666  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:17.555731  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:17.570916  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0408 11:22:17.571381  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:17.571905  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:17.571929  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:17.572306  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:17.572817  376679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:22:17.572846  376679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:22:17.588071  376679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0408 11:22:17.588512  376679 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:22:17.589000  376679 main.go:141] libmachine: Using API Version  1
	I0408 11:22:17.589021  376679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:22:17.589321  376679 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:22:17.589583  376679 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:22:17.591115  376679 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:22:17.591367  376679 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0408 11:22:17.591393  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:22:17.593870  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:17.594291  376679 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:22:17.594322  376679 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:22:17.594466  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:22:17.594675  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:22:17.594845  376679 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:22:17.594993  376679 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:22:19.558928  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.989236207s)
	I0408 11:22:19.558996  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.949647276s)
	I0408 11:22:19.559023  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559036  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559035  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559049  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559105  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.807021449s)
	I0408 11:22:19.559137  376679 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.774623212s)
	I0408 11:22:19.559145  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559130  376679 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.774546934s)
	I0408 11:22:19.559156  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559170  376679 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
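
The long bash one-liner that just completed is what injects that record: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the forward plugin (plus a log directive after errors), and pipes the result back through kubectl replace, so that host.minikube.internal resolves to the host-only gateway from inside the cluster. Reconstructed from the sed expressions, with the 192.168.39.1 gateway seen in this run, the inserted Corefile stanza looks roughly like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }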
	I0408 11:22:19.559247  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.511670199s)
	I0408 11:22:19.559273  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559286  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559363  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.504735891s)
	I0408 11:22:19.559380  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559389  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559471  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.414382599s)
	I0408 11:22:19.559502  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559512  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559532  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.559552  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.559583  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.559590  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.559598  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559605  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559624  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.057916977s)
	I0408 11:22:19.559642  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559651  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559742  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.799284501s)
	I0408 11:22:19.559760  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559776  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559824  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.559835  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.559843  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559851  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.559913  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.559919  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.526082133s)
	I0408 11:22:19.559947  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.559956  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.559963  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.559970  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	W0408 11:22:19.559972  376679 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0408 11:22:19.559997  376679 retry.go:31] will retry after 302.144514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
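
This first apply fails for an ordering reason: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl apply as the CRDs that define its kind, and the API server has not finished registering snapshot.storage.k8s.io/v1 by the time the custom resource is REST-mapped, hence "ensure CRDs are installed first". minikube just retries about 300ms later (the follow-up run at 11:22:19.863 below re-applies the same manifests with --force), by which point the CRDs from the first pass are established. A caller that wanted to avoid the retry could instead wait for the CRD's Established condition before applying custom resources; a sketch with the apiextensions client (function name and timeouts are assumptions, not what minikube does):

    package main

    import (
        "context"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForCRDEstablished polls a CustomResourceDefinition until its
    // Established condition is True, after which CRs of that kind can be applied.
    func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 60*time.Second, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not visible yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }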
	I0408 11:22:19.560077  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.055952037s)
	I0408 11:22:19.560096  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.560104  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.560170  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.560193  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.560198  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.560452  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.560485  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.560501  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.560510  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.560518  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.560564  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.560584  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.560595  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.560603  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.560609  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.560261  376679 node_ready.go:35] waiting up to 6m0s for node "addons-825010" to be "Ready" ...
	I0408 11:22:19.560801  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.560849  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.560874  376679 addons.go:470] Verifying addon metrics-server=true in "addons-825010"
	I0408 11:22:19.562444  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.562484  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.562491  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.562500  376679 addons.go:470] Verifying addon ingress=true in "addons-825010"
	I0408 11:22:19.564781  376679 out.go:177] * Verifying ingress addon...
	I0408 11:22:19.562848  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.562850  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.562881  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.562891  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.562905  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.562932  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.562973  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.562993  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.563011  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.563031  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.563072  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.563101  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.566287  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566305  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.566314  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.566318  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566340  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.566363  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566349  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566367  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.566291  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566369  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566458  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.566577  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.566611  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.566625  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.566564  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:19.566644  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:19.566652  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.566701  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.566708  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.566714  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.568553  376679 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-825010 service yakd-dashboard -n yakd-dashboard
	
	I0408 11:22:19.566892  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.566924  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:19.566960  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.566984  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:19.567209  376679 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0408 11:22:19.570357  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.570408  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:19.570454  376679 addons.go:470] Verifying addon registry=true in "addons-825010"
	I0408 11:22:19.572272  376679 out.go:177] * Verifying registry addon...
	I0408 11:22:19.575005  376679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0408 11:22:19.585903  376679 node_ready.go:49] node "addons-825010" has status "Ready":"True"
	I0408 11:22:19.585931  376679 node_ready.go:38] duration metric: took 25.299285ms for node "addons-825010" to be "Ready" ...
	I0408 11:22:19.585953  376679 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:22:19.587287  376679 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0408 11:22:19.587318  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:19.593572  376679 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0408 11:22:19.593597  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:19.601604  376679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qpbgn" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.627971  376679 pod_ready.go:92] pod "coredns-76f75df574-qpbgn" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:19.628003  376679 pod_ready.go:81] duration metric: took 26.363227ms for pod "coredns-76f75df574-qpbgn" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.628017  376679 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-vdcfm" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.676288  376679 pod_ready.go:92] pod "coredns-76f75df574-vdcfm" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:19.676323  376679 pod_ready.go:81] duration metric: took 48.29686ms for pod "coredns-76f75df574-vdcfm" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.676339  376679 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.710420  376679 pod_ready.go:92] pod "etcd-addons-825010" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:19.710453  376679 pod_ready.go:81] duration metric: took 34.105801ms for pod "etcd-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.710468  376679 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.750193  376679 pod_ready.go:92] pod "kube-apiserver-addons-825010" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:19.750222  376679 pod_ready.go:81] duration metric: took 39.746887ms for pod "kube-apiserver-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.750233  376679 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.863143  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0408 11:22:19.964742  376679 pod_ready.go:92] pod "kube-controller-manager-addons-825010" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:19.964771  376679 pod_ready.go:81] duration metric: took 214.531734ms for pod "kube-controller-manager-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:19.964788  376679 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5cw2t" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:20.064920  376679 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-825010" context rescaled to 1 replicas
	I0408 11:22:20.082306  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:20.082498  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:20.364948  376679 pod_ready.go:92] pod "kube-proxy-5cw2t" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:20.364981  376679 pod_ready.go:81] duration metric: took 400.184661ms for pod "kube-proxy-5cw2t" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:20.364995  376679 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:20.584206  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:20.592347  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:20.768057  376679 pod_ready.go:92] pod "kube-scheduler-addons-825010" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:20.768087  376679 pod_ready.go:81] duration metric: took 403.083741ms for pod "kube-scheduler-addons-825010" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:20.768100  376679 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:21.082306  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:21.084566  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:21.453172  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.766429749s)
	I0408 11:22:21.453216  376679 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.86181872s)
	I0408 11:22:21.453246  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:21.453267  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:21.455135  376679 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0408 11:22:21.453617  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:21.453647  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:21.456414  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:21.457690  376679 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0408 11:22:21.456441  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:21.458839  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:21.458938  376679 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0408 11:22:21.458967  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0408 11:22:21.459150  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:21.459197  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:21.459206  376679 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:22:21.459216  376679 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-825010"
	I0408 11:22:21.460657  376679 out.go:177] * Verifying csi-hostpath-driver addon...
	I0408 11:22:21.462938  376679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0408 11:22:21.508087  376679 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0408 11:22:21.508112  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:21.515447  376679 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0408 11:22:21.515481  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0408 11:22:21.575355  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:21.595841  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:21.627461  376679 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 11:22:21.627575  376679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0408 11:22:21.815595  376679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0408 11:22:21.970292  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:22.081569  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:22.083501  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:22.468722  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:22.576994  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:22.581538  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:22.776949  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:22.795040  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.931832644s)
	I0408 11:22:22.795124  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:22.795142  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:22.795463  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:22.795482  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:22.795491  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:22.795500  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:22.795790  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:22.795816  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:22.975859  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:23.122147  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:23.135618  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:23.228271  376679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.412629674s)
	I0408 11:22:23.228332  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:23.228388  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:23.228749  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:23.228776  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:23.228791  376679 main.go:141] libmachine: Making call to close driver server
	I0408 11:22:23.228801  376679 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:22:23.229086  376679 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:22:23.229106  376679 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:22:23.230112  376679 addons.go:470] Verifying addon gcp-auth=true in "addons-825010"
	I0408 11:22:23.232052  376679 out.go:177] * Verifying gcp-auth addon...
	I0408 11:22:23.234173  376679 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0408 11:22:23.275075  376679 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0408 11:22:23.275106  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:23.474919  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:23.575755  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:23.582013  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:23.738087  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:23.970295  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:24.075619  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:24.079589  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:24.240022  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:24.468382  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:24.574763  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:24.579046  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:24.738063  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:24.968806  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:25.074902  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:25.079188  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:25.239855  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:25.275371  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:25.468996  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:25.575647  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:25.580259  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:25.737956  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:25.969401  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:26.075505  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:26.079899  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:26.237709  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:26.478286  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:26.575126  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:26.579270  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:26.739932  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:27.252622  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:27.253074  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:27.260011  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:27.260258  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:27.275577  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:27.469025  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:27.574945  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:27.579069  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:27.739085  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:27.972000  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:28.075717  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:28.079866  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:28.239579  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:28.469129  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:28.576703  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:28.580293  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:28.931845  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:28.973052  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:29.075419  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:29.080775  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:29.241232  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:29.470043  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:29.575762  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:29.579705  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:29.738527  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:29.774447  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:29.969585  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:30.075907  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:30.080869  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:30.238519  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:30.469888  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:30.575039  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:30.579462  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:30.740086  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:30.969190  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:31.076267  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:31.081576  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:31.239780  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:31.469319  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:31.574817  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:31.582257  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:31.738884  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:31.774606  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:31.969516  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:32.075306  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:32.079276  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:32.238517  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:32.469304  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:32.574855  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:32.579587  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:32.737949  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:32.969704  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:33.075562  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:33.079787  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:33.238581  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:33.468801  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:33.576435  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:33.581478  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:33.738534  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:33.775227  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:33.969267  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:34.076452  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:34.079952  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:34.238908  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:34.469578  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:34.575392  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:34.579981  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:34.740054  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:34.973346  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:35.074850  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:35.079421  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:35.239961  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:35.470956  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:35.575561  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:35.579702  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:35.737561  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:35.776630  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:35.983175  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:36.077051  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:36.081263  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:36.240991  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:36.469458  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:36.575441  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:36.580134  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:36.738232  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:36.970862  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:37.076189  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:37.080651  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:37.239039  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:37.469954  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:37.575823  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:37.581107  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:37.738871  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:37.777847  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:37.969978  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:38.075866  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:38.079930  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:38.238371  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:38.469711  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:38.575529  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:38.579498  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:38.738208  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:38.970552  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:39.075877  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:39.084114  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:39.238607  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:39.468719  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:39.583357  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:39.584631  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:39.737751  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:39.968889  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:40.075489  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:40.079775  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:40.240152  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:40.277061  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:40.470241  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:40.578699  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:40.580459  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:40.739277  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:40.969774  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:41.076190  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:41.078889  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:41.240027  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:41.469852  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:41.575785  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:41.580285  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:41.739006  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:41.969935  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:42.078625  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:42.080370  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:42.240223  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:42.469495  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:42.575049  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:42.578936  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:42.738781  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:42.774858  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:42.969556  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:43.076836  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:43.084466  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:43.237692  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:43.468314  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:43.575104  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:43.579211  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:43.739075  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:43.969469  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:44.075175  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:44.079070  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:44.239163  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:44.470580  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:44.577460  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:44.585741  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:44.740275  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:44.777663  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:44.969787  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:45.074801  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:45.079287  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:45.238706  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:45.469377  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:45.575620  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:45.579383  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:46.055632  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:46.057999  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:46.079778  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:46.090084  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:46.238718  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:46.469930  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:46.575015  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:46.584213  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:46.738768  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:46.970207  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:47.086547  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:47.092053  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:47.242717  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:47.275899  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:47.471255  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:47.574901  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:47.579910  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:47.738693  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:47.969618  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:48.075389  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:48.081887  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:48.239014  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:48.469469  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:48.575390  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:48.579588  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:48.738868  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:48.970206  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:49.081043  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:49.089742  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:49.237936  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:49.469291  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:49.575324  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:49.580424  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:49.739285  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:49.775713  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:49.968291  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:50.080230  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:50.083185  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:50.238979  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:50.469387  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:50.575302  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:50.583241  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:50.738743  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:51.109289  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:51.109966  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:51.112856  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:51.238940  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:51.470477  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:51.576365  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:51.580133  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:51.738696  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:51.969079  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:52.075429  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:52.081842  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:52.238130  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:52.275008  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:52.470157  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:52.575448  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:52.579733  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:52.737762  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:52.969694  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:53.075902  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:53.079483  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:53.242224  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:53.469871  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:53.575328  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:53.579501  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:53.740188  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:53.969503  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:54.075891  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:54.079904  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:54.241891  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:54.275779  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:54.469555  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:54.576149  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:54.579777  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:54.738046  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:54.968983  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:55.075282  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:55.079568  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:55.238718  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:55.468764  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:55.575472  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:55.580150  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:55.739011  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:55.969824  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:56.075259  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:56.080790  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:56.238729  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:56.469168  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:56.575997  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:56.579760  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:56.739034  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:56.775515  376679 pod_ready.go:102] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"False"
	I0408 11:22:56.971602  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:57.074739  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:57.079998  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:57.238607  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:57.277072  376679 pod_ready.go:92] pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:57.277110  376679 pod_ready.go:81] duration metric: took 36.509000306s for pod "metrics-server-75d6c48ddd-zgtxw" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:57.277128  376679 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bh7lk" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:57.288987  376679 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bh7lk" in "kube-system" namespace has status "Ready":"True"
	I0408 11:22:57.289022  376679 pod_ready.go:81] duration metric: took 11.884136ms for pod "nvidia-device-plugin-daemonset-bh7lk" in "kube-system" namespace to be "Ready" ...
	I0408 11:22:57.289052  376679 pod_ready.go:38] duration metric: took 37.703085179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:22:57.289076  376679 api_server.go:52] waiting for apiserver process to appear ...
	I0408 11:22:57.289161  376679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:22:57.314808  376679 api_server.go:72] duration metric: took 47.417216231s to wait for apiserver process to appear ...
	I0408 11:22:57.314844  376679 api_server.go:88] waiting for apiserver healthz status ...
	I0408 11:22:57.314875  376679 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0408 11:22:57.319638  376679 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I0408 11:22:57.321243  376679 api_server.go:141] control plane version: v1.29.3
	I0408 11:22:57.321271  376679 api_server.go:131] duration metric: took 6.416092ms to wait for apiserver health ...
	I0408 11:22:57.321279  376679 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 11:22:57.330595  376679 system_pods.go:59] 18 kube-system pods found
	I0408 11:22:57.330633  376679 system_pods.go:61] "coredns-76f75df574-qpbgn" [0a324fee-6f37-465d-a076-ef028378364b] Running
	I0408 11:22:57.330645  376679 system_pods.go:61] "csi-hostpath-attacher-0" [ca3c2ab5-9a6c-487d-a2ff-1f2790ba60b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 11:22:57.330654  376679 system_pods.go:61] "csi-hostpath-resizer-0" [3e034817-a444-4c8e-b1d1-3d65462cc7cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 11:22:57.330662  376679 system_pods.go:61] "csi-hostpathplugin-89w2b" [6158ed88-c8bc-4ae5-9f1d-ee20cf23b683] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 11:22:57.330671  376679 system_pods.go:61] "etcd-addons-825010" [e8a83d2d-259b-4b4a-a9f7-52e10be24fda] Running
	I0408 11:22:57.330675  376679 system_pods.go:61] "kube-apiserver-addons-825010" [078e06f2-4ef3-4e93-9660-af2a1854b7dd] Running
	I0408 11:22:57.330678  376679 system_pods.go:61] "kube-controller-manager-addons-825010" [c881f282-62e6-4493-a31b-fc300a9dd4c7] Running
	I0408 11:22:57.330684  376679 system_pods.go:61] "kube-ingress-dns-minikube" [8f7ec1ca-982e-4e81-8c08-032b0284cbe1] Running
	I0408 11:22:57.330688  376679 system_pods.go:61] "kube-proxy-5cw2t" [a9f29e70-4aaf-4ebf-92ba-e7681b720359] Running
	I0408 11:22:57.330694  376679 system_pods.go:61] "kube-scheduler-addons-825010" [a5d6f884-f858-4e54-973b-1f594fdc5282] Running
	I0408 11:22:57.330697  376679 system_pods.go:61] "metrics-server-75d6c48ddd-zgtxw" [f4f27621-21f2-454c-82af-2b867ffac4e3] Running
	I0408 11:22:57.330700  376679 system_pods.go:61] "nvidia-device-plugin-daemonset-bh7lk" [112b1946-35f2-4c3c-ac13-d15c612bc3e9] Running
	I0408 11:22:57.330704  376679 system_pods.go:61] "registry-proxy-6sfpx" [7b943a40-c7a9-411f-911f-fb652b42547e] Running
	I0408 11:22:57.330710  376679 system_pods.go:61] "registry-qw4cl" [9f724ae4-733d-40e1-a150-764921001381] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 11:22:57.330718  376679 system_pods.go:61] "snapshot-controller-58dbcc7b99-nm2nw" [54f02b38-b623-4beb-81a0-39172b3cc537] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:22:57.330732  376679 system_pods.go:61] "snapshot-controller-58dbcc7b99-zzs9x" [d6397202-85e8-4b64-8b85-ebffa1c56287] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:22:57.330739  376679 system_pods.go:61] "storage-provisioner" [3418ee02-c4d8-4293-95cf-afccded1c797] Running
	I0408 11:22:57.330745  376679 system_pods.go:61] "tiller-deploy-7b677967b9-2d2hj" [b3fa7f26-5133-4ed3-a287-d04e374d1484] Running
	I0408 11:22:57.330752  376679 system_pods.go:74] duration metric: took 9.466363ms to wait for pod list to return data ...
	I0408 11:22:57.330761  376679 default_sa.go:34] waiting for default service account to be created ...
	I0408 11:22:57.333171  376679 default_sa.go:45] found service account: "default"
	I0408 11:22:57.333195  376679 default_sa.go:55] duration metric: took 2.423738ms for default service account to be created ...
	I0408 11:22:57.333203  376679 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 11:22:57.341964  376679 system_pods.go:86] 18 kube-system pods found
	I0408 11:22:57.341997  376679 system_pods.go:89] "coredns-76f75df574-qpbgn" [0a324fee-6f37-465d-a076-ef028378364b] Running
	I0408 11:22:57.342010  376679 system_pods.go:89] "csi-hostpath-attacher-0" [ca3c2ab5-9a6c-487d-a2ff-1f2790ba60b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0408 11:22:57.342018  376679 system_pods.go:89] "csi-hostpath-resizer-0" [3e034817-a444-4c8e-b1d1-3d65462cc7cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0408 11:22:57.342027  376679 system_pods.go:89] "csi-hostpathplugin-89w2b" [6158ed88-c8bc-4ae5-9f1d-ee20cf23b683] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0408 11:22:57.342034  376679 system_pods.go:89] "etcd-addons-825010" [e8a83d2d-259b-4b4a-a9f7-52e10be24fda] Running
	I0408 11:22:57.342039  376679 system_pods.go:89] "kube-apiserver-addons-825010" [078e06f2-4ef3-4e93-9660-af2a1854b7dd] Running
	I0408 11:22:57.342043  376679 system_pods.go:89] "kube-controller-manager-addons-825010" [c881f282-62e6-4493-a31b-fc300a9dd4c7] Running
	I0408 11:22:57.342047  376679 system_pods.go:89] "kube-ingress-dns-minikube" [8f7ec1ca-982e-4e81-8c08-032b0284cbe1] Running
	I0408 11:22:57.342051  376679 system_pods.go:89] "kube-proxy-5cw2t" [a9f29e70-4aaf-4ebf-92ba-e7681b720359] Running
	I0408 11:22:57.342055  376679 system_pods.go:89] "kube-scheduler-addons-825010" [a5d6f884-f858-4e54-973b-1f594fdc5282] Running
	I0408 11:22:57.342058  376679 system_pods.go:89] "metrics-server-75d6c48ddd-zgtxw" [f4f27621-21f2-454c-82af-2b867ffac4e3] Running
	I0408 11:22:57.342062  376679 system_pods.go:89] "nvidia-device-plugin-daemonset-bh7lk" [112b1946-35f2-4c3c-ac13-d15c612bc3e9] Running
	I0408 11:22:57.342066  376679 system_pods.go:89] "registry-proxy-6sfpx" [7b943a40-c7a9-411f-911f-fb652b42547e] Running
	I0408 11:22:57.342070  376679 system_pods.go:89] "registry-qw4cl" [9f724ae4-733d-40e1-a150-764921001381] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0408 11:22:57.342076  376679 system_pods.go:89] "snapshot-controller-58dbcc7b99-nm2nw" [54f02b38-b623-4beb-81a0-39172b3cc537] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:22:57.342086  376679 system_pods.go:89] "snapshot-controller-58dbcc7b99-zzs9x" [d6397202-85e8-4b64-8b85-ebffa1c56287] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0408 11:22:57.342090  376679 system_pods.go:89] "storage-provisioner" [3418ee02-c4d8-4293-95cf-afccded1c797] Running
	I0408 11:22:57.342094  376679 system_pods.go:89] "tiller-deploy-7b677967b9-2d2hj" [b3fa7f26-5133-4ed3-a287-d04e374d1484] Running
	I0408 11:22:57.342100  376679 system_pods.go:126] duration metric: took 8.891787ms to wait for k8s-apps to be running ...
	I0408 11:22:57.342108  376679 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 11:22:57.342155  376679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:22:57.358300  376679 system_svc.go:56] duration metric: took 16.178962ms WaitForService to wait for kubelet
	I0408 11:22:57.358333  376679 kubeadm.go:576] duration metric: took 47.460749343s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:22:57.358354  376679 node_conditions.go:102] verifying NodePressure condition ...
	I0408 11:22:57.362427  376679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:22:57.362458  376679 node_conditions.go:123] node cpu capacity is 2
	I0408 11:22:57.362472  376679 node_conditions.go:105] duration metric: took 4.113461ms to run NodePressure ...
	I0408 11:22:57.362483  376679 start.go:240] waiting for startup goroutines ...
	I0408 11:22:57.469176  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:57.575214  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:57.579285  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:57.738845  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:57.970475  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:58.075415  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:58.084337  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:58.239391  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:58.469298  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:58.575505  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:58.579996  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:58.738536  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:58.969256  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:59.076122  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:59.081303  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:59.238549  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:59.468538  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:22:59.575157  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:22:59.579467  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:22:59.739022  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:22:59.969728  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:00.074778  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:00.079546  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:00.238975  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:00.468668  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:00.575335  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:00.580423  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:00.738825  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:00.969172  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:01.075665  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:01.080243  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:01.238779  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:01.469253  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:01.576088  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:01.582028  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:01.738090  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:01.972976  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:02.075793  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:02.079899  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:02.239007  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:02.470660  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:02.580265  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:02.581832  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:02.738652  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:02.969389  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:03.075799  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:03.080056  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:03.239516  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:03.469503  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:03.575538  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:03.580185  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:03.738691  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:03.969418  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:04.080147  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:04.081231  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:04.241684  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:04.469101  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:04.575571  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:04.579895  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:04.738317  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:04.968991  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:05.075230  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:05.079138  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:05.238407  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:05.470703  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:05.575596  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:05.581283  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:05.740733  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:05.971076  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:06.076662  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:06.090934  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:06.238684  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:06.468661  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:06.575457  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:06.579846  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:06.739302  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:06.969022  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:07.077234  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:07.080961  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:07.238852  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:07.470504  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:07.575814  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:07.599931  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:07.738651  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:07.969625  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:08.075584  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:08.079839  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:08.238509  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:08.469983  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:08.576588  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:08.579977  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:08.738897  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:08.971768  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:09.078182  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:09.082379  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:09.243304  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:09.479804  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:09.575437  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:09.580625  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:09.737829  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:09.968893  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:10.075373  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:10.079884  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:10.238636  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:10.469108  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:10.575757  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:10.580082  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:10.738816  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:10.969373  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:11.075310  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:11.080027  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:11.240206  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:11.468748  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:11.575330  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:11.583361  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:11.738544  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:11.969348  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:12.075678  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:12.080771  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0408 11:23:12.238881  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:12.473179  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:12.576269  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:12.579331  376679 kapi.go:107] duration metric: took 53.004328857s to wait for kubernetes.io/minikube-addons=registry ...
	I0408 11:23:12.740220  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:12.969629  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:13.075513  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:13.238738  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:13.469797  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:13.576074  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:13.739563  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:13.968524  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:14.075219  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:14.238394  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:14.469532  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:14.576005  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:14.737912  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:14.969340  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:15.075266  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:15.238183  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:15.469442  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:15.575344  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:15.737932  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:15.969574  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:16.075865  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:16.237884  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:16.470271  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:16.578463  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:16.742023  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:16.969894  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:17.076693  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:17.239123  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:17.470223  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:17.575811  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:17.741096  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:17.969559  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:18.075315  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:18.238689  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:18.469457  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:18.576070  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:18.738957  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:18.970270  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:19.075897  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:19.237878  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:19.469471  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:19.574979  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:19.738323  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:19.968779  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:20.076074  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:20.238743  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:20.469778  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:20.575479  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:20.739058  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:20.971568  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:21.075191  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:21.238297  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:21.469089  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:21.575865  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:21.738222  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:21.968127  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:22.075797  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:22.238912  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:22.474027  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:22.576866  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:22.738250  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:22.969039  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:23.075849  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:23.238149  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:23.473086  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:23.575693  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:23.738100  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:23.969696  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:24.080773  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:24.241054  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:24.472805  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:24.575708  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:24.739244  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:24.970724  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:25.077685  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:25.247087  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:25.494761  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:25.583489  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:25.741077  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:25.973683  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:26.105267  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:26.238319  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:26.469002  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:26.576030  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:26.738233  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:26.971600  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:27.075433  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:27.237805  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:27.470602  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:27.576593  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:27.738308  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:27.969461  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:28.076644  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:28.238320  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:28.471226  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:28.580301  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:28.738770  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:28.976034  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:29.075700  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:29.238878  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:29.471947  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:29.575972  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:29.738026  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:29.968450  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:30.075286  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:30.238213  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:30.469280  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:30.575109  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:30.738254  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:30.969336  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:31.077163  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:31.238416  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:31.542066  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:31.575578  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:31.739279  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:31.973780  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:32.077022  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:32.238618  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:32.469476  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:32.575842  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:32.737716  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:32.969810  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:33.076299  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:33.242058  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:33.473672  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:33.575510  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:33.738458  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:33.969225  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:34.081395  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:34.238894  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:34.470229  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:34.575997  376679 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0408 11:23:34.748460  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:35.382554  376679 kapi.go:107] duration metric: took 1m15.815339919s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0408 11:23:35.383208  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:35.383955  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:35.473390  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:35.739510  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:35.976254  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:36.238714  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:36.470049  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:36.739538  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:36.969847  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:37.238173  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:37.468643  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:37.739296  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:37.968857  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:38.237756  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:38.469566  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:38.738916  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:38.975532  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:39.238833  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:39.473563  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:39.740047  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0408 11:23:39.968957  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:40.239650  376679 kapi.go:107] duration metric: took 1m17.005473132s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0408 11:23:40.241357  376679 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-825010 cluster.
	I0408 11:23:40.242672  376679 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0408 11:23:40.244372  376679 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
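The three gcp-auth messages above describe how the addon behaves once enabled: its webhook mounts GCP credentials into every newly created pod unless the pod carries the `gcp-auth-skip-secret` label, and pods that already existed can be refreshed by recreating them or by rerunning addons enable with --refresh. A minimal sketch of a pod manifest that opts out is shown below; only the label key comes from the output above, while the label value "true", the pod name, and the image are illustrative assumptions.

    # Sketch of a pod that opts out of gcp-auth credential mounting.
    # Only the label key gcp-auth-skip-secret appears in the log above;
    # the value "true", the pod name, and the image are assumptions.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: nginx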
	I0408 11:23:40.472865  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:40.969355  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:41.468688  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:41.970305  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:42.470313  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:42.969488  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:43.468846  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:43.969919  376679 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0408 11:23:44.469847  376679 kapi.go:107] duration metric: took 1m23.006901857s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0408 11:23:44.472106  376679 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, metrics-server, storage-provisioner, helm-tiller, inspektor-gadget, cloud-spanner, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0408 11:23:44.473712  376679 addons.go:505] duration metric: took 1m34.576077118s for enable addons: enabled=[nvidia-device-plugin storage-provisioner-rancher ingress-dns metrics-server storage-provisioner helm-tiller inspektor-gadget cloud-spanner yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0408 11:23:44.473764  376679 start.go:245] waiting for cluster config update ...
	I0408 11:23:44.473790  376679 start.go:254] writing updated cluster config ...
	I0408 11:23:44.474117  376679 ssh_runner.go:195] Run: rm -f paused
	I0408 11:23:44.528081  376679 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 11:23:44.530020  376679 out.go:177] * Done! kubectl is now configured to use "addons-825010" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.894991326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712575604894963013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573339,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8f213af-2553-418a-8cb8-bbe8d28dd6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.895911423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6f6c596-dae1-4178-8dce-53e68b8e2421 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.896001490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6f6c596-dae1-4178-8dce-53e68b8e2421 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.896342276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca4bdb40007e260a1a4a2431ad34d868f060b7a6d73cfb990bf9ebebc6970765,PodSandboxId:a63f579747b817bdac88c714aac165008995b979e01c1bd5553d6a6c13b8c14b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712575598457098650,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9bc5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9,},Annotations:map[string]string{io.kubernetes.container.hash: ffd70863,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1451c858904b8479e38a7ebca410987551439b2faed01e46ed3f4bc777b58d68,PodSandboxId:bcbb392c0d2d877d49b3625fe7154f3ee4c86ab820aebc93ec2e402792d7334e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712575457808341488,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8947abd8-1bf9-4645-ab59-b86e4e1fd8f3,},Annotations:map[string]string{io.kubern
etes.container.hash: 72e6ea38,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befa7fcf31bb131b51e90fd043c5391b4d336e8ef35feb04400881fcd4a3aa56,PodSandboxId:6e344786315ff9466c62aafbbb7034333a8e15cb6fdd5123974e99548c827284,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712575446223020491,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-qvbrm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 554a155a-0a39-4a68-ae77-ee5f6e56c84d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f94f4d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6afb955413041bf758fec26bdaab6f08a0d13aeabdc383f8b2b35dc935ddfb6,PodSandboxId:f87e8e636924f414ed233a4e9d1c0a6507995be0588ee546565bbb5e922e7927,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712575419284093449,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-7jx4w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 39502dff-f0bf-485b-a601-4aeb1cf953df,},Annotations:map[string]string{io.kubernetes.container.hash: 7b029b40,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e249981beea8478e1a2feea02e3366f44290279441cd13f3d0f433aec8988a,PodSandboxId:086b7981e8328c6a736c9ada2fc45f7cd3c303b1e00c724afcfe30d2952a0070,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1712575399194286571,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rw4lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea220305-3656-4d32-9a31-074f2eebb5a2,},Annotations:map[string]string{io.kubernetes.container.hash: 88150fd0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c140f199c40c905a1c938e69e3a94792516aee769e959670f50c4d11ba9bb8,PodSandboxId:d7e5f859134e18c824dbbbb03808442325ffafa5e632cb09fd6d8b21121a51d4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712575399033701476,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qwzn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdf1c531-18c0-425a-8737-b4f7ca7f0bc3,},Annotations:map[string]string{io.kubernetes.container.hash: 67dfb29b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347d2a71b10f366726309bf7c113da87b08c4ae735ac7e6d5b8a0c6af9024e55,PodSandboxId:dd06e9c5c9ac13ad1c8e4f967dc0c9847f0eebe622606f77562edd0c8f8f7525,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712575394591205224,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-xks79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bd857ebb-24e9-4443-aac8-7cb6caaf2d95,},Annotations:map[string]string{io.kubernetes.container.hash: 4941c183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9626445195cc4bc97a6aac99387e663b5c8e7e5b3320f6ed408727de1dd081d,PodSandboxId:1c77a4abdee9c404bbd0367619bf53c3cb6991a758d7512bba727e590b93948b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1712575366184499629,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-grh7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: b6e7f495-597b-4ef6-8f69-6ae1669946e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6566e3b8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e86982f1e796c6fb88da9f02fbc515f1b69b4bb14f74d9f0f380d85de01bc9b,PodSandboxId:61be60a05e09db859e4371aa91fcaf64998280b543adcb974894e5bb2f7ab434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712575337177068032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3418ee02-c4d8-4293-95cf-afccded1c797,},Annotations:map[string]string{io.kubernetes.container.hash: e4ed8a58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73c399acbefb364c1871a416a6144741b1bb7a04019effebef2e961d74c8669,PodSandboxId:7f374ee5aeb529996c0d3e6ad3bcfb0f172c4a0f27ae5defe8311b3222a852fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712575334225198678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qpbgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a324fee-6f37-465d-a076-ef028378364b,},Annotations:map[string]string{io.kubernetes.container.hash: cc9ba681,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae48f9611ea67e7429c3505a06d0c878caffd67bc5b605ca859d784637c3b75d,PodSa
ndboxId:8ca405d7f505d3bc6b82a66538f955c6d6c710e275a697763172d506ad584e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712575331902035653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cw2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f29e70-4aaf-4ebf-92ba-e7681b720359,},Annotations:map[string]string{io.kubernetes.container.hash: 67e78bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e38e40c07dd63569d918af822cbee1f79b12b6de3243115af2e5ecda6c653da,PodSandboxId:552b7b39f0a03f33562146372
23c1aab156cec545fe368958bd06b2ac8fd04c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712575311085056153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f62afebfc767fe33788de653f385281,},Annotations:map[string]string{io.kubernetes.container.hash: 332f9526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e0c6fe2314977525a7f7fa03a5fa44f3ba7b8d626acffb94e2e838d2a5702e,PodSandboxId:6ca058954ef1b9be800cdd703cd6f7e0e7552fbf9ca5ecbd80e925b285919e52,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712575311077880872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8143dd82d19ca09434801e7cd0aca27e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7236bc85f77885f4f6c51735e3bd8354dccd9df6134d01eb4bc1a5eae1a1094c,PodSandboxId:9da688d75b13a3f749ce27fbcd44fe305347f9a14105a1b3b0520af49596cf58,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712575311029389358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc6dcca4f205513926d7763c72bdf856,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baed11d40e558b51e4b656a6e7f689097634b02958c77a858e4bbb27defa799d,PodSandboxId:61fae539137472d0aacea7e4e477064de46cfca1ca0c90d256990efcc84640ba,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712575311013188630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a9e986a1d2499aca80a4b50e35d44f6,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9962c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6f6c596-dae1-4178-8dce-53e68b8e2421 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.940468246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05393194-5668-40d7-be1d-97a1a6ef32e0 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.940542869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05393194-5668-40d7-be1d-97a1a6ef32e0 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.942124575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a5fcded-5cba-4fe8-adb3-916e37d953b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.944254945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712575604944223762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573339,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a5fcded-5cba-4fe8-adb3-916e37d953b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.945137576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60811e94-d2ec-481f-8acd-7b619e5ec077 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.945191168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60811e94-d2ec-481f-8acd-7b619e5ec077 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.946088848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca4bdb40007e260a1a4a2431ad34d868f060b7a6d73cfb990bf9ebebc6970765,PodSandboxId:a63f579747b817bdac88c714aac165008995b979e01c1bd5553d6a6c13b8c14b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712575598457098650,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9bc5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9,},Annotations:map[string]string{io.kubernetes.container.hash: ffd70863,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1451c858904b8479e38a7ebca410987551439b2faed01e46ed3f4bc777b58d68,PodSandboxId:bcbb392c0d2d877d49b3625fe7154f3ee4c86ab820aebc93ec2e402792d7334e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712575457808341488,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8947abd8-1bf9-4645-ab59-b86e4e1fd8f3,},Annotations:map[string]string{io.kubern
etes.container.hash: 72e6ea38,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befa7fcf31bb131b51e90fd043c5391b4d336e8ef35feb04400881fcd4a3aa56,PodSandboxId:6e344786315ff9466c62aafbbb7034333a8e15cb6fdd5123974e99548c827284,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712575446223020491,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-qvbrm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 554a155a-0a39-4a68-ae77-ee5f6e56c84d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f94f4d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6afb955413041bf758fec26bdaab6f08a0d13aeabdc383f8b2b35dc935ddfb6,PodSandboxId:f87e8e636924f414ed233a4e9d1c0a6507995be0588ee546565bbb5e922e7927,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712575419284093449,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-7jx4w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 39502dff-f0bf-485b-a601-4aeb1cf953df,},Annotations:map[string]string{io.kubernetes.container.hash: 7b029b40,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e249981beea8478e1a2feea02e3366f44290279441cd13f3d0f433aec8988a,PodSandboxId:086b7981e8328c6a736c9ada2fc45f7cd3c303b1e00c724afcfe30d2952a0070,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1712575399194286571,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rw4lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea220305-3656-4d32-9a31-074f2eebb5a2,},Annotations:map[string]string{io.kubernetes.container.hash: 88150fd0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c140f199c40c905a1c938e69e3a94792516aee769e959670f50c4d11ba9bb8,PodSandboxId:d7e5f859134e18c824dbbbb03808442325ffafa5e632cb09fd6d8b21121a51d4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712575399033701476,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qwzn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdf1c531-18c0-425a-8737-b4f7ca7f0bc3,},Annotations:map[string]string{io.kubernetes.container.hash: 67dfb29b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347d2a71b10f366726309bf7c113da87b08c4ae735ac7e6d5b8a0c6af9024e55,PodSandboxId:dd06e9c5c9ac13ad1c8e4f967dc0c9847f0eebe622606f77562edd0c8f8f7525,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712575394591205224,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-xks79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bd857ebb-24e9-4443-aac8-7cb6caaf2d95,},Annotations:map[string]string{io.kubernetes.container.hash: 4941c183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9626445195cc4bc97a6aac99387e663b5c8e7e5b3320f6ed408727de1dd081d,PodSandboxId:1c77a4abdee9c404bbd0367619bf53c3cb6991a758d7512bba727e590b93948b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1712575366184499629,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-grh7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: b6e7f495-597b-4ef6-8f69-6ae1669946e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6566e3b8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e86982f1e796c6fb88da9f02fbc515f1b69b4bb14f74d9f0f380d85de01bc9b,PodSandboxId:61be60a05e09db859e4371aa91fcaf64998280b543adcb974894e5bb2f7ab434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712575337177068032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3418ee02-c4d8-4293-95cf-afccded1c797,},Annotations:map[string]string{io.kubernetes.container.hash: e4ed8a58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73c399acbefb364c1871a416a6144741b1bb7a04019effebef2e961d74c8669,PodSandboxId:7f374ee5aeb529996c0d3e6ad3bcfb0f172c4a0f27ae5defe8311b3222a852fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712575334225198678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qpbgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a324fee-6f37-465d-a076-ef028378364b,},Annotations:map[string]string{io.kubernetes.container.hash: cc9ba681,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae48f9611ea67e7429c3505a06d0c878caffd67bc5b605ca859d784637c3b75d,PodSa
ndboxId:8ca405d7f505d3bc6b82a66538f955c6d6c710e275a697763172d506ad584e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712575331902035653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cw2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f29e70-4aaf-4ebf-92ba-e7681b720359,},Annotations:map[string]string{io.kubernetes.container.hash: 67e78bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e38e40c07dd63569d918af822cbee1f79b12b6de3243115af2e5ecda6c653da,PodSandboxId:552b7b39f0a03f33562146372
23c1aab156cec545fe368958bd06b2ac8fd04c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712575311085056153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f62afebfc767fe33788de653f385281,},Annotations:map[string]string{io.kubernetes.container.hash: 332f9526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e0c6fe2314977525a7f7fa03a5fa44f3ba7b8d626acffb94e2e838d2a5702e,PodSandboxId:6ca058954ef1b9be800cdd703cd6f7e0e7552fbf9ca5ecbd80e925b285919e52,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712575311077880872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8143dd82d19ca09434801e7cd0aca27e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7236bc85f77885f4f6c51735e3bd8354dccd9df6134d01eb4bc1a5eae1a1094c,PodSandboxId:9da688d75b13a3f749ce27fbcd44fe305347f9a14105a1b3b0520af49596cf58,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712575311029389358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc6dcca4f205513926d7763c72bdf856,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baed11d40e558b51e4b656a6e7f689097634b02958c77a858e4bbb27defa799d,PodSandboxId:61fae539137472d0aacea7e4e477064de46cfca1ca0c90d256990efcc84640ba,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712575311013188630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a9e986a1d2499aca80a4b50e35d44f6,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9962c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60811e94-d2ec-481f-8acd-7b619e5ec077 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.990726692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73a31c89-c934-4151-ba38-e6400be47cc8 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.990877074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73a31c89-c934-4151-ba38-e6400be47cc8 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.992323406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0038d076-459e-4005-b2a9-def3cf3f6e94 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.993528042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712575604993497429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573339,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0038d076-459e-4005-b2a9-def3cf3f6e94 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.994418435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74db3f0d-0a23-4c1a-b7e8-ae12524eccd3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.994479015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74db3f0d-0a23-4c1a-b7e8-ae12524eccd3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:44 addons-825010 crio[683]: time="2024-04-08 11:26:44.994895501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca4bdb40007e260a1a4a2431ad34d868f060b7a6d73cfb990bf9ebebc6970765,PodSandboxId:a63f579747b817bdac88c714aac165008995b979e01c1bd5553d6a6c13b8c14b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712575598457098650,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9bc5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9,},Annotations:map[string]string{io.kubernetes.container.hash: ffd70863,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1451c858904b8479e38a7ebca410987551439b2faed01e46ed3f4bc777b58d68,PodSandboxId:bcbb392c0d2d877d49b3625fe7154f3ee4c86ab820aebc93ec2e402792d7334e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712575457808341488,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8947abd8-1bf9-4645-ab59-b86e4e1fd8f3,},Annotations:map[string]string{io.kubern
etes.container.hash: 72e6ea38,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befa7fcf31bb131b51e90fd043c5391b4d336e8ef35feb04400881fcd4a3aa56,PodSandboxId:6e344786315ff9466c62aafbbb7034333a8e15cb6fdd5123974e99548c827284,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712575446223020491,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-qvbrm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 554a155a-0a39-4a68-ae77-ee5f6e56c84d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f94f4d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6afb955413041bf758fec26bdaab6f08a0d13aeabdc383f8b2b35dc935ddfb6,PodSandboxId:f87e8e636924f414ed233a4e9d1c0a6507995be0588ee546565bbb5e922e7927,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712575419284093449,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-7jx4w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 39502dff-f0bf-485b-a601-4aeb1cf953df,},Annotations:map[string]string{io.kubernetes.container.hash: 7b029b40,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e249981beea8478e1a2feea02e3366f44290279441cd13f3d0f433aec8988a,PodSandboxId:086b7981e8328c6a736c9ada2fc45f7cd3c303b1e00c724afcfe30d2952a0070,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1712575399194286571,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rw4lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea220305-3656-4d32-9a31-074f2eebb5a2,},Annotations:map[string]string{io.kubernetes.container.hash: 88150fd0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c140f199c40c905a1c938e69e3a94792516aee769e959670f50c4d11ba9bb8,PodSandboxId:d7e5f859134e18c824dbbbb03808442325ffafa5e632cb09fd6d8b21121a51d4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712575399033701476,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qwzn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdf1c531-18c0-425a-8737-b4f7ca7f0bc3,},Annotations:map[string]string{io.kubernetes.container.hash: 67dfb29b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347d2a71b10f366726309bf7c113da87b08c4ae735ac7e6d5b8a0c6af9024e55,PodSandboxId:dd06e9c5c9ac13ad1c8e4f967dc0c9847f0eebe622606f77562edd0c8f8f7525,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712575394591205224,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-xks79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bd857ebb-24e9-4443-aac8-7cb6caaf2d95,},Annotations:map[string]string{io.kubernetes.container.hash: 4941c183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9626445195cc4bc97a6aac99387e663b5c8e7e5b3320f6ed408727de1dd081d,PodSandboxId:1c77a4abdee9c404bbd0367619bf53c3cb6991a758d7512bba727e590b93948b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1712575366184499629,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-grh7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: b6e7f495-597b-4ef6-8f69-6ae1669946e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6566e3b8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e86982f1e796c6fb88da9f02fbc515f1b69b4bb14f74d9f0f380d85de01bc9b,PodSandboxId:61be60a05e09db859e4371aa91fcaf64998280b543adcb974894e5bb2f7ab434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712575337177068032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3418ee02-c4d8-4293-95cf-afccded1c797,},Annotations:map[string]string{io.kubernetes.container.hash: e4ed8a58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73c399acbefb364c1871a416a6144741b1bb7a04019effebef2e961d74c8669,PodSandboxId:7f374ee5aeb529996c0d3e6ad3bcfb0f172c4a0f27ae5defe8311b3222a852fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712575334225198678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qpbgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a324fee-6f37-465d-a076-ef028378364b,},Annotations:map[string]string{io.kubernetes.container.hash: cc9ba681,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae48f9611ea67e7429c3505a06d0c878caffd67bc5b605ca859d784637c3b75d,PodSa
ndboxId:8ca405d7f505d3bc6b82a66538f955c6d6c710e275a697763172d506ad584e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712575331902035653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cw2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f29e70-4aaf-4ebf-92ba-e7681b720359,},Annotations:map[string]string{io.kubernetes.container.hash: 67e78bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e38e40c07dd63569d918af822cbee1f79b12b6de3243115af2e5ecda6c653da,PodSandboxId:552b7b39f0a03f33562146372
23c1aab156cec545fe368958bd06b2ac8fd04c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712575311085056153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f62afebfc767fe33788de653f385281,},Annotations:map[string]string{io.kubernetes.container.hash: 332f9526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e0c6fe2314977525a7f7fa03a5fa44f3ba7b8d626acffb94e2e838d2a5702e,PodSandboxId:6ca058954ef1b9be800cdd703cd6f7e0e7552fbf9ca5ecbd80e925b285919e52,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712575311077880872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8143dd82d19ca09434801e7cd0aca27e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7236bc85f77885f4f6c51735e3bd8354dccd9df6134d01eb4bc1a5eae1a1094c,PodSandboxId:9da688d75b13a3f749ce27fbcd44fe305347f9a14105a1b3b0520af49596cf58,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712575311029389358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc6dcca4f205513926d7763c72bdf856,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baed11d40e558b51e4b656a6e7f689097634b02958c77a858e4bbb27defa799d,PodSandboxId:61fae539137472d0aacea7e4e477064de46cfca1ca0c90d256990efcc84640ba,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712575311013188630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a9e986a1d2499aca80a4b50e35d44f6,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9962c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74db3f0d-0a23-4c1a-b7e8-ae12524eccd3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.036961858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed721acb-e85f-42c7-a112-c9aaba7e5fbd name=/runtime.v1.RuntimeService/Version
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.037040627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed721acb-e85f-42c7-a112-c9aaba7e5fbd name=/runtime.v1.RuntimeService/Version
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.038247581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afaed2a0-0d1e-40d1-b910-ad12abe380d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.039693210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712575605039664245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573339,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afaed2a0-0d1e-40d1-b910-ad12abe380d8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.040321974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84127a83-d0c4-4988-b909-9433f671223f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.040410632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84127a83-d0c4-4988-b909-9433f671223f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:26:45 addons-825010 crio[683]: time="2024-04-08 11:26:45.040864328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca4bdb40007e260a1a4a2431ad34d868f060b7a6d73cfb990bf9ebebc6970765,PodSandboxId:a63f579747b817bdac88c714aac165008995b979e01c1bd5553d6a6c13b8c14b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1712575598457098650,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-9bc5n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9,},Annotations:map[string]string{io.kubernetes.container.hash: ffd70863,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1451c858904b8479e38a7ebca410987551439b2faed01e46ed3f4bc777b58d68,PodSandboxId:bcbb392c0d2d877d49b3625fe7154f3ee4c86ab820aebc93ec2e402792d7334e,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1712575457808341488,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8947abd8-1bf9-4645-ab59-b86e4e1fd8f3,},Annotations:map[string]string{io.kubern
etes.container.hash: 72e6ea38,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:befa7fcf31bb131b51e90fd043c5391b4d336e8ef35feb04400881fcd4a3aa56,PodSandboxId:6e344786315ff9466c62aafbbb7034333a8e15cb6fdd5123974e99548c827284,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1712575446223020491,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-qvbrm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 554a155a-0a39-4a68-ae77-ee5f6e56c84d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f94f4d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6afb955413041bf758fec26bdaab6f08a0d13aeabdc383f8b2b35dc935ddfb6,PodSandboxId:f87e8e636924f414ed233a4e9d1c0a6507995be0588ee546565bbb5e922e7927,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1712575419284093449,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-7jx4w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 39502dff-f0bf-485b-a601-4aeb1cf953df,},Annotations:map[string]string{io.kubernetes.container.hash: 7b029b40,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e249981beea8478e1a2feea02e3366f44290279441cd13f3d0f433aec8988a,PodSandboxId:086b7981e8328c6a736c9ada2fc45f7cd3c303b1e00c724afcfe30d2952a0070,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1712575399194286571,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rw4lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ea220305-3656-4d32-9a31-074f2eebb5a2,},Annotations:map[string]string{io.kubernetes.container.hash: 88150fd0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c140f199c40c905a1c938e69e3a94792516aee769e959670f50c4d11ba9bb8,PodSandboxId:d7e5f859134e18c824dbbbb03808442325ffafa5e632cb09fd6d8b21121a51d4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1712575399033701476,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qwzn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdf1c531-18c0-425a-8737-b4f7ca7f0bc3,},Annotations:map[string]string{io.kubernetes.container.hash: 67dfb29b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347d2a71b10f366726309bf7c113da87b08c4ae735ac7e6d5b8a0c6af9024e55,PodSandboxId:dd06e9c5c9ac13ad1c8e4f967dc0c9847f0eebe622606f77562edd0c8f8f7525,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1712575394591205224,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-xks79,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bd857ebb-24e9-4443-aac8-7cb6caaf2d95,},Annotations:map[string]string{io.kubernetes.container.hash: 4941c183,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9626445195cc4bc97a6aac99387e663b5c8e7e5b3320f6ed408727de1dd081d,PodSandboxId:1c77a4abdee9c404bbd0367619bf53c3cb6991a758d7512bba727e590b93948b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1712575366184499629,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-grh7b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: b6e7f495-597b-4ef6-8f69-6ae1669946e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6566e3b8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e86982f1e796c6fb88da9f02fbc515f1b69b4bb14f74d9f0f380d85de01bc9b,PodSandboxId:61be60a05e09db859e4371aa91fcaf64998280b543adcb974894e5bb2f7ab434,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712575337177068032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3418ee02-c4d8-4293-95cf-afccded1c797,},Annotations:map[string]string{io.kubernetes.container.hash: e4ed8a58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73c399acbefb364c1871a416a6144741b1bb7a04019effebef2e961d74c8669,PodSandboxId:7f374ee5aeb529996c0d3e6ad3bcfb0f172c4a0f27ae5defe8311b3222a852fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712575334225198678,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-qpbgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a324fee-6f37-465d-a076-ef028378364b,},Annotations:map[string]string{io.kubernetes.container.hash: cc9ba681,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae48f9611ea67e7429c3505a06d0c878caffd67bc5b605ca859d784637c3b75d,PodSa
ndboxId:8ca405d7f505d3bc6b82a66538f955c6d6c710e275a697763172d506ad584e4a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712575331902035653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5cw2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9f29e70-4aaf-4ebf-92ba-e7681b720359,},Annotations:map[string]string{io.kubernetes.container.hash: 67e78bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e38e40c07dd63569d918af822cbee1f79b12b6de3243115af2e5ecda6c653da,PodSandboxId:552b7b39f0a03f33562146372
23c1aab156cec545fe368958bd06b2ac8fd04c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712575311085056153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f62afebfc767fe33788de653f385281,},Annotations:map[string]string{io.kubernetes.container.hash: 332f9526,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98e0c6fe2314977525a7f7fa03a5fa44f3ba7b8d626acffb94e2e838d2a5702e,PodSandboxId:6ca058954ef1b9be800cdd703cd6f7e0e7552fbf9ca5ecbd80e925b285919e52,Metadat
a:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712575311077880872,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8143dd82d19ca09434801e7cd0aca27e,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7236bc85f77885f4f6c51735e3bd8354dccd9df6134d01eb4bc1a5eae1a1094c,PodSandboxId:9da688d75b13a3f749ce27fbcd44fe305347f9a14105a1b3b0520af49596cf58,Metadata:&ContainerMetad
ata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712575311029389358,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc6dcca4f205513926d7763c72bdf856,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baed11d40e558b51e4b656a6e7f689097634b02958c77a858e4bbb27defa799d,PodSandboxId:61fae539137472d0aacea7e4e477064de46cfca1ca0c90d256990efcc84640ba,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712575311013188630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-825010,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a9e986a1d2499aca80a4b50e35d44f6,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9962c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84127a83-d0c4-4988-b909-9433f671223f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca4bdb40007e2       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   a63f579747b81       hello-world-app-5d77478584-9bc5n
	1451c858904b8       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   bcbb392c0d2d8       nginx
	befa7fcf31bb1       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   6e344786315ff       headlamp-5b77dbd7c4-qvbrm
	a6afb95541304       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   f87e8e636924f       gcp-auth-7d69788767-7jx4w
	48e249981beea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   086b7981e8328       ingress-nginx-admission-patch-rw4lc
	88c140f199c40       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   d7e5f859134e1       ingress-nginx-admission-create-4qwzn
	347d2a71b10f3       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   dd06e9c5c9ac1       local-path-provisioner-78b46b4d5c-xks79
	c9626445195cc       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   1c77a4abdee9c       yakd-dashboard-9947fc6bf-grh7b
	0e86982f1e796       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   61be60a05e09d       storage-provisioner
	a73c399acbefb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   7f374ee5aeb52       coredns-76f75df574-qpbgn
	ae48f9611ea67       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             4 minutes ago       Running             kube-proxy                0                   8ca405d7f505d       kube-proxy-5cw2t
	1e38e40c07dd6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   552b7b39f0a03       etcd-addons-825010
	98e0c6fe23149       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             4 minutes ago       Running             kube-scheduler            0                   6ca058954ef1b       kube-scheduler-addons-825010
	7236bc85f7788       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             4 minutes ago       Running             kube-controller-manager   0                   9da688d75b13a       kube-controller-manager-addons-825010
	baed11d40e558       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             4 minutes ago       Running             kube-apiserver            0                   61fae53913747       kube-apiserver-addons-825010
	
	
	==> coredns [a73c399acbefb364c1871a416a6144741b1bb7a04019effebef2e961d74c8669] <==
	[INFO] 10.244.0.9:47702 - 64626 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000086295s
	[INFO] 10.244.0.9:55382 - 50994 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062127s
	[INFO] 10.244.0.9:55382 - 50737 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000198124s
	[INFO] 10.244.0.9:33414 - 38343 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004923s
	[INFO] 10.244.0.9:33414 - 26822 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115305s
	[INFO] 10.244.0.9:35669 - 27265 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000044511s
	[INFO] 10.244.0.9:35669 - 20611 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108647s
	[INFO] 10.244.0.9:50158 - 39932 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000077592s
	[INFO] 10.244.0.9:50158 - 32255 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105856s
	[INFO] 10.244.0.9:36744 - 51260 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041602s
	[INFO] 10.244.0.9:36744 - 54078 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000130013s
	[INFO] 10.244.0.9:59204 - 57538 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135989s
	[INFO] 10.244.0.9:59204 - 53184 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000274699s
	[INFO] 10.244.0.9:45548 - 60535 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140435s
	[INFO] 10.244.0.9:45548 - 7017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000059222s
	[INFO] 10.244.0.22:37711 - 50380 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000337838s
	[INFO] 10.244.0.22:52282 - 11429 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013157s
	[INFO] 10.244.0.22:39332 - 26800 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142791s
	[INFO] 10.244.0.22:37305 - 59557 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077223s
	[INFO] 10.244.0.22:33458 - 42121 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000089945s
	[INFO] 10.244.0.22:51049 - 16331 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057126s
	[INFO] 10.244.0.22:57041 - 14215 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001390577s
	[INFO] 10.244.0.22:33170 - 59818 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001768229s
	[INFO] 10.244.0.25:41743 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00040717s
	[INFO] 10.244.0.25:40221 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148265s
	
	
	==> describe nodes <==
	Name:               addons-825010
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-825010
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=addons-825010
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T11_21_57_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-825010
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:21:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-825010
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:26:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:24:30 +0000   Mon, 08 Apr 2024 11:21:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:24:30 +0000   Mon, 08 Apr 2024 11:21:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:24:30 +0000   Mon, 08 Apr 2024 11:21:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:24:30 +0000   Mon, 08 Apr 2024 11:21:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    addons-825010
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc30bcc31be14839aa477838642740a9
	  System UUID:                bc30bcc3-1be1-4839-aa47-7838642740a9
	  Boot ID:                    89cf21ed-bd50-4b5e-868c-d9ea3c125778
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9bc5n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-7d69788767-7jx4w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  headlamp                    headlamp-5b77dbd7c4-qvbrm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-76f75df574-qpbgn                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m35s
	  kube-system                 etcd-addons-825010                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m48s
	  kube-system                 kube-apiserver-addons-825010               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-825010      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-5cw2t                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-825010               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  local-path-storage          local-path-provisioner-78b46b4d5c-xks79    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-grh7b             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m32s  kube-proxy       
	  Normal  Starting                 4m48s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s  kubelet          Node addons-825010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s  kubelet          Node addons-825010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s  kubelet          Node addons-825010 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m48s  kubelet          Node addons-825010 status is now: NodeReady
	  Normal  RegisteredNode           4m37s  node-controller  Node addons-825010 event: Registered Node addons-825010 in Controller
	
	
	==> dmesg <==
	[Apr 8 11:22] systemd-fstab-generator[1483]: Ignoring "noauto" option for root device
	[  +0.161643] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.095201] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.030429] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.716405] kauditd_printk_skb: 77 callbacks suppressed
	[ +10.998982] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.186381] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.982840] kauditd_printk_skb: 4 callbacks suppressed
	[Apr 8 11:23] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.976364] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.014172] kauditd_printk_skb: 60 callbacks suppressed
	[  +8.454442] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.254890] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.879541] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.021084] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.004850] kauditd_printk_skb: 32 callbacks suppressed
	[Apr 8 11:24] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.708590] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.877561] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.358835] kauditd_printk_skb: 38 callbacks suppressed
	[ +17.036537] kauditd_printk_skb: 7 callbacks suppressed
	[Apr 8 11:25] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.649213] kauditd_printk_skb: 33 callbacks suppressed
	[Apr 8 11:26] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.089484] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1e38e40c07dd63569d918af822cbee1f79b12b6de3243115af2e5ecda6c653da] <==
	{"level":"info","ts":"2024-04-08T11:23:35.349684Z","caller":"traceutil/trace.go:171","msg":"trace[256440634] transaction","detail":"{read_only:false; response_revision:1103; number_of_response:1; }","duration":"402.591437ms","start":"2024-04-08T11:23:34.947031Z","end":"2024-04-08T11:23:35.349623Z","steps":["trace[256440634] 'process raft request'  (duration: 402.250898ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:23:35.353879Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T11:23:34.947001Z","time spent":"406.699756ms","remote":"127.0.0.1:44192","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6094,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-65496f9567-6mkg5\" mod_revision:693 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-65496f9567-6mkg5\" value_size:6016 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-controller-65496f9567-6mkg5\" > >"}
	{"level":"warn","ts":"2024-04-08T11:23:35.355973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.823324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-08T11:23:35.358099Z","caller":"traceutil/trace.go:171","msg":"trace[179881440] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1103; }","duration":"126.97552ms","start":"2024-04-08T11:23:35.23111Z","end":"2024-04-08T11:23:35.358085Z","steps":["trace[179881440] 'agreement among raft nodes before linearized reading'  (duration: 119.158164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:23:35.350147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.288845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85370"}
	{"level":"info","ts":"2024-04-08T11:23:35.358289Z","caller":"traceutil/trace.go:171","msg":"trace[481602152] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1103; }","duration":"397.452203ms","start":"2024-04-08T11:23:34.960828Z","end":"2024-04-08T11:23:35.358281Z","steps":["trace[481602152] 'agreement among raft nodes before linearized reading'  (duration: 389.176ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:23:35.358335Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T11:23:34.960772Z","time spent":"397.548849ms","remote":"127.0.0.1:44192","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85394,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-04-08T11:23:35.349968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.838373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14599"}
	{"level":"info","ts":"2024-04-08T11:23:35.358532Z","caller":"traceutil/trace.go:171","msg":"trace[1259684622] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1103; }","duration":"291.494191ms","start":"2024-04-08T11:23:35.067031Z","end":"2024-04-08T11:23:35.358526Z","steps":["trace[1259684622] 'agreement among raft nodes before linearized reading'  (duration: 282.714937ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:23:43.746462Z","caller":"traceutil/trace.go:171","msg":"trace[167591139] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"129.120703ms","start":"2024-04-08T11:23:43.617302Z","end":"2024-04-08T11:23:43.746423Z","steps":["trace[167591139] 'process raft request'  (duration: 128.263816ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:24:04.160267Z","caller":"traceutil/trace.go:171","msg":"trace[826735163] linearizableReadLoop","detail":"{readStateIndex:1383; appliedIndex:1382; }","duration":"286.551945ms","start":"2024-04-08T11:24:03.873647Z","end":"2024-04-08T11:24:04.160199Z","steps":["trace[826735163] 'read index received'  (duration: 286.422938ms)","trace[826735163] 'applied index is now lower than readState.Index'  (duration: 128.526µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T11:24:04.16074Z","caller":"traceutil/trace.go:171","msg":"trace[974179141] transaction","detail":"{read_only:false; response_revision:1337; number_of_response:1; }","duration":"298.041039ms","start":"2024-04-08T11:24:03.862682Z","end":"2024-04-08T11:24:04.160723Z","steps":["trace[974179141] 'process raft request'  (duration: 297.431665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:24:04.165014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.117991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-04-08T11:24:04.166043Z","caller":"traceutil/trace.go:171","msg":"trace[389431304] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1337; }","duration":"292.391634ms","start":"2024-04-08T11:24:03.873636Z","end":"2024-04-08T11:24:04.166028Z","steps":["trace[389431304] 'agreement among raft nodes before linearized reading'  (duration: 287.01557ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:24:06.076964Z","caller":"traceutil/trace.go:171","msg":"trace[779750421] linearizableReadLoop","detail":"{readStateIndex:1388; appliedIndex:1387; }","duration":"348.435262ms","start":"2024-04-08T11:24:05.728515Z","end":"2024-04-08T11:24:06.07695Z","steps":["trace[779750421] 'read index received'  (duration: 348.167378ms)","trace[779750421] 'applied index is now lower than readState.Index'  (duration: 267.328µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T11:24:06.077058Z","caller":"traceutil/trace.go:171","msg":"trace[1638798047] transaction","detail":"{read_only:false; response_revision:1342; number_of_response:1; }","duration":"499.400622ms","start":"2024-04-08T11:24:05.57765Z","end":"2024-04-08T11:24:06.077051Z","steps":["trace[1638798047] 'process raft request'  (duration: 499.089778ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:24:06.077166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T11:24:05.577634Z","time spent":"499.449366ms","remote":"127.0.0.1:44276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1268 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-04-08T11:24:06.077248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.788277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-04-08T11:24:06.077283Z","caller":"traceutil/trace.go:171","msg":"trace[1594819156] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1342; }","duration":"309.853809ms","start":"2024-04-08T11:24:05.767419Z","end":"2024-04-08T11:24:06.077273Z","steps":["trace[1594819156] 'agreement among raft nodes before linearized reading'  (duration: 309.755852ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:24:06.07732Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T11:24:05.767405Z","time spent":"309.909052ms","remote":"127.0.0.1:44164","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":846,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc\" "}
	{"level":"warn","ts":"2024-04-08T11:24:06.077393Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.875409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.221\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-08T11:24:06.077419Z","caller":"traceutil/trace.go:171","msg":"trace[423851176] range","detail":"{range_begin:/registry/masterleases/192.168.39.221; range_end:; response_count:1; response_revision:1342; }","duration":"348.895154ms","start":"2024-04-08T11:24:05.728511Z","end":"2024-04-08T11:24:06.077406Z","steps":["trace[423851176] 'agreement among raft nodes before linearized reading'  (duration: 348.831137ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:24:06.077433Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T11:24:05.728478Z","time spent":"348.952763ms","remote":"127.0.0.1:44018","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/192.168.39.221\" "}
	{"level":"warn","ts":"2024-04-08T11:24:16.827195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.166414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-08T11:24:16.827326Z","caller":"traceutil/trace.go:171","msg":"trace[420334211] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1454; }","duration":"132.424023ms","start":"2024-04-08T11:24:16.694888Z","end":"2024-04-08T11:24:16.827312Z","steps":["trace[420334211] 'range keys from in-memory index tree'  (duration: 132.116575ms)"],"step_count":1}
	
	
	==> gcp-auth [a6afb955413041bf758fec26bdaab6f08a0d13aeabdc383f8b2b35dc935ddfb6] <==
	2024/04/08 11:23:39 GCP Auth Webhook started!
	2024/04/08 11:23:44 Ready to marshal response ...
	2024/04/08 11:23:44 Ready to write response ...
	2024/04/08 11:23:44 Ready to marshal response ...
	2024/04/08 11:23:44 Ready to write response ...
	2024/04/08 11:23:55 Ready to marshal response ...
	2024/04/08 11:23:55 Ready to write response ...
	2024/04/08 11:23:57 Ready to marshal response ...
	2024/04/08 11:23:57 Ready to write response ...
	2024/04/08 11:23:59 Ready to marshal response ...
	2024/04/08 11:23:59 Ready to write response ...
	2024/04/08 11:23:59 Ready to marshal response ...
	2024/04/08 11:23:59 Ready to write response ...
	2024/04/08 11:23:59 Ready to marshal response ...
	2024/04/08 11:23:59 Ready to write response ...
	2024/04/08 11:24:09 Ready to marshal response ...
	2024/04/08 11:24:09 Ready to write response ...
	2024/04/08 11:24:12 Ready to marshal response ...
	2024/04/08 11:24:12 Ready to write response ...
	2024/04/08 11:24:24 Ready to marshal response ...
	2024/04/08 11:24:24 Ready to write response ...
	2024/04/08 11:24:54 Ready to marshal response ...
	2024/04/08 11:24:54 Ready to write response ...
	2024/04/08 11:26:34 Ready to marshal response ...
	2024/04/08 11:26:34 Ready to write response ...
	
	
	==> kernel <==
	 11:26:45 up 5 min,  0 users,  load average: 0.63, 1.16, 0.62
	Linux addons-825010 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [baed11d40e558b51e4b656a6e7f689097634b02958c77a858e4bbb27defa799d] <==
	E0408 11:22:56.984757       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0408 11:22:56.988276       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0408 11:23:59.078986       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.32.99"}
	I0408 11:24:06.078658       1 trace.go:236] Trace[1464660521]: "Update" accept:application/json, */*,audit-id:bb8069e2-e34a-420e-921d-9b4950e77934,client:10.244.0.21,api-group:coordination.k8s.io,api-version:v1,name:ingress-nginx-leader,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/ingress-nginx/leases/ingress-nginx-leader,user-agent:nginx-ingress-controller/v1.10.0 (linux/amd64) ingress-nginx/71f78d49f0a496c31d4c19f095469f3f23900f8a,verb:PUT (08-Apr-2024 11:24:05.576) (total time: 502ms):
	Trace[1464660521]: ["GuaranteedUpdate etcd3" audit-id:bb8069e2-e34a-420e-921d-9b4950e77934,key:/leases/ingress-nginx/ingress-nginx-leader,type:*coordination.Lease,resource:leases.coordination.k8s.io 502ms (11:24:05.576)
	Trace[1464660521]:  ---"Txn call completed" 501ms (11:24:06.078)]
	Trace[1464660521]: [502.317407ms] [502.317407ms] END
	I0408 11:24:12.491566       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0408 11:24:12.734767       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.186.44"}
	I0408 11:24:13.945657       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0408 11:24:15.007946       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0408 11:24:33.979047       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0408 11:24:57.975138       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0408 11:25:11.374350       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:25:11.374922       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 11:25:11.423058       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:25:11.423119       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 11:25:11.443847       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:25:11.443908       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0408 11:25:11.456723       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0408 11:25:11.456833       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0408 11:25:12.424454       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0408 11:25:12.456866       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0408 11:25:12.488900       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0408 11:26:34.973907       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.121.126"}
	
	
	==> kube-controller-manager [7236bc85f77885f4f6c51735e3bd8354dccd9df6134d01eb4bc1a5eae1a1094c] <==
	W0408 11:25:47.560453       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:25:47.560506       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0408 11:25:50.146183       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:25:50.146350       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0408 11:25:53.405061       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:25:53.405181       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0408 11:26:16.503947       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:26:16.504080       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0408 11:26:20.490859       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:26:20.490980       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0408 11:26:21.270224       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:26:21.270361       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0408 11:26:34.715196       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0408 11:26:34.761033       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9bc5n"
	I0408 11:26:34.768920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.232022ms"
	I0408 11:26:34.799649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.653003ms"
	I0408 11:26:34.819146       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.407551ms"
	I0408 11:26:34.819264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.548µs"
	I0408 11:26:37.086611       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0408 11:26:37.094389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="8.952µs"
	I0408 11:26:37.107219       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0408 11:26:39.361377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.878004ms"
	I0408 11:26:39.361476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.248µs"
	W0408 11:26:45.304569       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0408 11:26:45.304621       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [ae48f9611ea67e7429c3505a06d0c878caffd67bc5b605ca859d784637c3b75d] <==
	I0408 11:22:12.691543       1 server_others.go:72] "Using iptables proxy"
	I0408 11:22:12.706057       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.221"]
	I0408 11:22:12.808046       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 11:22:12.808065       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 11:22:12.808077       1 server_others.go:168] "Using iptables Proxier"
	I0408 11:22:12.811146       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 11:22:12.811319       1 server.go:865] "Version info" version="v1.29.3"
	I0408 11:22:12.811329       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:22:12.812653       1 config.go:188] "Starting service config controller"
	I0408 11:22:12.812662       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 11:22:12.812675       1 config.go:97] "Starting endpoint slice config controller"
	I0408 11:22:12.812678       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 11:22:12.817390       1 config.go:315] "Starting node config controller"
	I0408 11:22:12.817400       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 11:22:12.914103       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 11:22:12.914140       1 shared_informer.go:318] Caches are synced for service config
	I0408 11:22:12.919705       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [98e0c6fe2314977525a7f7fa03a5fa44f3ba7b8d626acffb94e2e838d2a5702e] <==
	W0408 11:21:53.964004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 11:21:53.964577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 11:21:54.983251       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 11:21:54.984070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 11:21:55.026998       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 11:21:55.027121       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 11:21:55.048428       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 11:21:55.048480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 11:21:55.050024       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 11:21:55.050070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 11:21:55.071143       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 11:21:55.071193       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 11:21:55.114949       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:21:55.115045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:21:55.208544       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 11:21:55.208597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 11:21:55.213557       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:21:55.213603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 11:21:55.238923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 11:21:55.238979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 11:21:55.273254       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 11:21:55.274131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 11:21:55.306948       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:21:55.307071       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0408 11:21:57.151355       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.775328    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="6158ed88-c8bc-4ae5-9f1d-ee20cf23b683" containerName="csi-external-health-monitor-controller"
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.775361    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6397202-85e8-4b64-8b85-ebffa1c56287" containerName="volume-snapshot-controller"
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.775392    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="6158ed88-c8bc-4ae5-9f1d-ee20cf23b683" containerName="csi-snapshotter"
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.775436    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="6158ed88-c8bc-4ae5-9f1d-ee20cf23b683" containerName="csi-provisioner"
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.775469    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="6158ed88-c8bc-4ae5-9f1d-ee20cf23b683" containerName="hostpath"
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.877646    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9-gcp-creds\") pod \"hello-world-app-5d77478584-9bc5n\" (UID: \"3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9\") " pod="default/hello-world-app-5d77478584-9bc5n"
	Apr 08 11:26:34 addons-825010 kubelet[1273]: I0408 11:26:34.877724    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmqh\" (UniqueName: \"kubernetes.io/projected/3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9-kube-api-access-7pmqh\") pod \"hello-world-app-5d77478584-9bc5n\" (UID: \"3f97160e-f04d-4a21-9e9a-a8b73c5eb3c9\") " pod="default/hello-world-app-5d77478584-9bc5n"
	Apr 08 11:26:36 addons-825010 kubelet[1273]: I0408 11:26:36.086666    1273 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c228f\" (UniqueName: \"kubernetes.io/projected/8f7ec1ca-982e-4e81-8c08-032b0284cbe1-kube-api-access-c228f\") pod \"8f7ec1ca-982e-4e81-8c08-032b0284cbe1\" (UID: \"8f7ec1ca-982e-4e81-8c08-032b0284cbe1\") "
	Apr 08 11:26:36 addons-825010 kubelet[1273]: I0408 11:26:36.091753    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7ec1ca-982e-4e81-8c08-032b0284cbe1-kube-api-access-c228f" (OuterVolumeSpecName: "kube-api-access-c228f") pod "8f7ec1ca-982e-4e81-8c08-032b0284cbe1" (UID: "8f7ec1ca-982e-4e81-8c08-032b0284cbe1"). InnerVolumeSpecName "kube-api-access-c228f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 11:26:36 addons-825010 kubelet[1273]: I0408 11:26:36.166011    1273 scope.go:117] "RemoveContainer" containerID="64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf"
	Apr 08 11:26:36 addons-825010 kubelet[1273]: I0408 11:26:36.188318    1273 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c228f\" (UniqueName: \"kubernetes.io/projected/8f7ec1ca-982e-4e81-8c08-032b0284cbe1-kube-api-access-c228f\") on node \"addons-825010\" DevicePath \"\""
	Apr 08 11:26:36 addons-825010 kubelet[1273]: I0408 11:26:36.211766    1273 scope.go:117] "RemoveContainer" containerID="64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf"
	Apr 08 11:26:36 addons-825010 kubelet[1273]: E0408 11:26:36.212875    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf\": container with ID starting with 64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf not found: ID does not exist" containerID="64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf"
	Apr 08 11:26:36 addons-825010 kubelet[1273]: I0408 11:26:36.213092    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf"} err="failed to get container status \"64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf\": rpc error: code = NotFound desc = could not find container \"64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf\": container with ID starting with 64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf not found: ID does not exist"
	Apr 08 11:26:37 addons-825010 kubelet[1273]: I0408 11:26:37.569361    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7ec1ca-982e-4e81-8c08-032b0284cbe1" path="/var/lib/kubelet/pods/8f7ec1ca-982e-4e81-8c08-032b0284cbe1/volumes"
	Apr 08 11:26:37 addons-825010 kubelet[1273]: I0408 11:26:37.570150    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf1c531-18c0-425a-8737-b4f7ca7f0bc3" path="/var/lib/kubelet/pods/cdf1c531-18c0-425a-8737-b4f7ca7f0bc3/volumes"
	Apr 08 11:26:37 addons-825010 kubelet[1273]: I0408 11:26:37.570588    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea220305-3656-4d32-9a31-074f2eebb5a2" path="/var/lib/kubelet/pods/ea220305-3656-4d32-9a31-074f2eebb5a2/volumes"
	Apr 08 11:26:40 addons-825010 kubelet[1273]: I0408 11:26:40.423470    1273 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e-webhook-cert\") pod \"7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e\" (UID: \"7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e\") "
	Apr 08 11:26:40 addons-825010 kubelet[1273]: I0408 11:26:40.423523    1273 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdwct\" (UniqueName: \"kubernetes.io/projected/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e-kube-api-access-bdwct\") pod \"7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e\" (UID: \"7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e\") "
	Apr 08 11:26:40 addons-825010 kubelet[1273]: I0408 11:26:40.426901    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e" (UID: "7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 08 11:26:40 addons-825010 kubelet[1273]: I0408 11:26:40.427953    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e-kube-api-access-bdwct" (OuterVolumeSpecName: "kube-api-access-bdwct") pod "7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e" (UID: "7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e"). InnerVolumeSpecName "kube-api-access-bdwct". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 08 11:26:40 addons-825010 kubelet[1273]: I0408 11:26:40.524122    1273 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e-webhook-cert\") on node \"addons-825010\" DevicePath \"\""
	Apr 08 11:26:40 addons-825010 kubelet[1273]: I0408 11:26:40.524166    1273 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bdwct\" (UniqueName: \"kubernetes.io/projected/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e-kube-api-access-bdwct\") on node \"addons-825010\" DevicePath \"\""
	Apr 08 11:26:41 addons-825010 kubelet[1273]: I0408 11:26:41.384097    1273 scope.go:117] "RemoveContainer" containerID="f250c256785ece1c177b2cb507f5015d36d1f0bff069e2de9312847dc5cd8c60"
	Apr 08 11:26:41 addons-825010 kubelet[1273]: I0408 11:26:41.554535    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e" path="/var/lib/kubelet/pods/7a7ed8e4-4c6e-46a1-a3c9-83bf36cc1e1e/volumes"
	
	
	==> storage-provisioner [0e86982f1e796c6fb88da9f02fbc515f1b69b4bb14f74d9f0f380d85de01bc9b] <==
	I0408 11:22:17.677954       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 11:22:18.553176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 11:22:18.553288       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 11:22:18.609736       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 11:22:18.610417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-825010_92f12e60-8aa6-42c7-aa45-387c4e1bd75c!
	I0408 11:22:18.610661       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ee330cc-6066-4145-b3d9-84f0a7836204", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-825010_92f12e60-8aa6-42c7-aa45-387c4e1bd75c became leader
	I0408 11:22:18.714524       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-825010_92f12e60-8aa6-42c7-aa45-387c4e1bd75c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-825010 -n addons-825010
helpers_test.go:261: (dbg) Run:  kubectl --context addons-825010 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.08s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (13.35s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-825010 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-825010 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [05a16385-ef40-4797-afe9-546c5426e9dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [05a16385-ef40-4797-afe9-546c5426e9dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [05a16385-ef40-4797-afe9-546c5426e9dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005105151s
addons_test.go:891: (dbg) Run:  kubectl --context addons-825010 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 ssh "cat /opt/local-path-provisioner/pvc-7d75c78d-eccc-423f-92dd-5653fcb66ade_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-825010 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-825010 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-825010 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (413.353728ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:23:57.532318  378016 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:23:57.532457  378016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:23:57.532469  378016 out.go:304] Setting ErrFile to fd 2...
	I0408 11:23:57.532474  378016 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:23:57.532696  378016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:23:57.533017  378016 mustload.go:65] Loading cluster: addons-825010
	I0408 11:23:57.533396  378016 config.go:182] Loaded profile config "addons-825010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:23:57.533424  378016 addons.go:597] checking whether the cluster is paused
	I0408 11:23:57.533528  378016 config.go:182] Loaded profile config "addons-825010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:23:57.533547  378016 host.go:66] Checking if "addons-825010" exists ...
	I0408 11:23:57.534012  378016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:23:57.534074  378016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:23:57.549104  378016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34141
	I0408 11:23:57.549751  378016 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:23:57.550363  378016 main.go:141] libmachine: Using API Version  1
	I0408 11:23:57.550398  378016 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:23:57.550869  378016 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:23:57.551152  378016 main.go:141] libmachine: (addons-825010) Calling .GetState
	I0408 11:23:57.552928  378016 main.go:141] libmachine: (addons-825010) Calling .DriverName
	I0408 11:23:57.553196  378016 ssh_runner.go:195] Run: systemctl --version
	I0408 11:23:57.553228  378016 main.go:141] libmachine: (addons-825010) Calling .GetSSHHostname
	I0408 11:23:57.555387  378016 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:23:57.555774  378016 main.go:141] libmachine: (addons-825010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:c8:e0", ip: ""} in network mk-addons-825010: {Iface:virbr1 ExpiryTime:2024-04-08 12:21:29 +0000 UTC Type:0 Mac:52:54:00:a9:c8:e0 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:addons-825010 Clientid:01:52:54:00:a9:c8:e0}
	I0408 11:23:57.555811  378016 main.go:141] libmachine: (addons-825010) DBG | domain addons-825010 has defined IP address 192.168.39.221 and MAC address 52:54:00:a9:c8:e0 in network mk-addons-825010
	I0408 11:23:57.555951  378016 main.go:141] libmachine: (addons-825010) Calling .GetSSHPort
	I0408 11:23:57.556136  378016 main.go:141] libmachine: (addons-825010) Calling .GetSSHKeyPath
	I0408 11:23:57.556304  378016 main.go:141] libmachine: (addons-825010) Calling .GetSSHUsername
	I0408 11:23:57.556430  378016 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/addons-825010/id_rsa Username:docker}
	I0408 11:23:57.673616  378016 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 11:23:57.673715  378016 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:23:57.784577  378016 cri.go:89] found id: "2e30627b02f4a0775a437857c53985abde315b4ac47178caff52f616e4534d5b"
	I0408 11:23:57.784609  378016 cri.go:89] found id: "b021e5c827ba2b33df1ffc0e2a0e9e5a9ae6ba2a4d49d869dd74e5e6a09eb55f"
	I0408 11:23:57.784616  378016 cri.go:89] found id: "02c8d8da693d91c8dde8189ae9907ec9f27d3c62bcd16d6a4c17648fa70f9a5f"
	I0408 11:23:57.784628  378016 cri.go:89] found id: "3b2a96d7fa4a9e262fc7268fb1d5e83aab0feb3614fd49878ada6a325bbafaaa"
	I0408 11:23:57.784632  378016 cri.go:89] found id: "72ae13e6ad2ccb43a0e87444e7aa3a6d44fae70c63f87b168cf092d866ade94c"
	I0408 11:23:57.784644  378016 cri.go:89] found id: "5d7cc186587956f995ad8c636386f0f9cc4cde8c564c4892099b664182bbbd9d"
	I0408 11:23:57.784652  378016 cri.go:89] found id: "0cbef616e3fb1215f0bd2aec5c2b8591be5f02c97e8df6798b98c0afdb00a0f0"
	I0408 11:23:57.784656  378016 cri.go:89] found id: "abf352338d3ad251dbf6bb5d232bf9eced9dd96d2b55cacf2778e54a20ae1dcb"
	I0408 11:23:57.784661  378016 cri.go:89] found id: "15f0e05f34cb0b36c0bbfd239acaeba9b32f8fa2e893db64370833d9142a3f9d"
	I0408 11:23:57.784678  378016 cri.go:89] found id: "2bf6ed3c500b9a6cda49e5121b04a573981d6770256dac27c3001309a7c68711"
	I0408 11:23:57.784686  378016 cri.go:89] found id: "e45d65f66093de3c2dd0a1b7fd8b4685a077719a39b5d6588db0e7ea198afa3b"
	I0408 11:23:57.784690  378016 cri.go:89] found id: "afacfc75664f82561cc0b16b7540d764753e399c8524e2f8fac9540d34421394"
	I0408 11:23:57.784698  378016 cri.go:89] found id: "bfe38f58813da7dccc3d0fdcc271f74a748a08808a15c4b8d37888888a583483"
	I0408 11:23:57.784703  378016 cri.go:89] found id: "d3544522f3c43d52e23957bfcce38824020155e0d617f8d9c0a6a44ed7c148a8"
	I0408 11:23:57.784713  378016 cri.go:89] found id: "64623bbd7e745705f010ff1978cbcaa1eb8fa4fb4796ef9fc8bec48c3364e8bf"
	I0408 11:23:57.784719  378016 cri.go:89] found id: "edebc43edae6d8a1645b2682aa56f492949bd837ac7d4c354f5422c436285390"
	I0408 11:23:57.784724  378016 cri.go:89] found id: "0e86982f1e796c6fb88da9f02fbc515f1b69b4bb14f74d9f0f380d85de01bc9b"
	I0408 11:23:57.784732  378016 cri.go:89] found id: "a73c399acbefb364c1871a416a6144741b1bb7a04019effebef2e961d74c8669"
	I0408 11:23:57.784736  378016 cri.go:89] found id: "ae48f9611ea67e7429c3505a06d0c878caffd67bc5b605ca859d784637c3b75d"
	I0408 11:23:57.784743  378016 cri.go:89] found id: "1e38e40c07dd63569d918af822cbee1f79b12b6de3243115af2e5ecda6c653da"
	I0408 11:23:57.784748  378016 cri.go:89] found id: "98e0c6fe2314977525a7f7fa03a5fa44f3ba7b8d626acffb94e2e838d2a5702e"
	I0408 11:23:57.784751  378016 cri.go:89] found id: "7236bc85f77885f4f6c51735e3bd8354dccd9df6134d01eb4bc1a5eae1a1094c"
	I0408 11:23:57.784756  378016 cri.go:89] found id: "baed11d40e558b51e4b656a6e7f689097634b02958c77a858e4bbb27defa799d"
	I0408 11:23:57.784759  378016 cri.go:89] found id: ""
	I0408 11:23:57.784837  378016 ssh_runner.go:195] Run: sudo runc list -f json
	I0408 11:23:57.878460  378016 main.go:141] libmachine: Making call to close driver server
	I0408 11:23:57.878485  378016 main.go:141] libmachine: (addons-825010) Calling .Close
	I0408 11:23:57.878816  378016 main.go:141] libmachine: (addons-825010) DBG | Closing plugin on server side
	I0408 11:23:57.878876  378016 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:23:57.878889  378016 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:23:57.881517  378016 out.go:177] 
	W0408 11:23:57.882898  378016 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T11:23:57Z" level=error msg="stat /run/runc/a9ec5f6831281483dbb1fce0ea882cc98c93a5db70bc07706fe18cb27af75b46: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-08T11:23:57Z" level=error msg="stat /run/runc/a9ec5f6831281483dbb1fce0ea882cc98c93a5db70bc07706fe18cb27af75b46: no such file or directory"
	
	W0408 11:23:57.882914  378016 out.go:239] * 
	* 
	W0408 11:23:57.885668  378016 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:23:57.887315  378016 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:922: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-825010 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (13.35s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.51s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-825010
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-825010: exit status 82 (2m0.507375809s)

                                                
                                                
-- stdout --
	* Stopping node "addons-825010"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-825010" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-825010
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-825010: exit status 11 (21.713157969s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-825010" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-825010
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-825010: exit status 11 (6.142840832s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-825010" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-825010
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-825010: exit status 11 (6.143804676s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-825010" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 node stop m02 -v=7 --alsologtostderr
E0408 11:39:12.227776  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:39:28.755891  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:40:50.677015  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.52150373s)

                                                
                                                
-- stdout --
	* Stopping node "ha-438604-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:39:10.473780  389962 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:39:10.473953  389962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:39:10.473965  389962 out.go:304] Setting ErrFile to fd 2...
	I0408 11:39:10.473969  389962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:39:10.474204  389962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:39:10.474510  389962 mustload.go:65] Loading cluster: ha-438604
	I0408 11:39:10.474937  389962 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:39:10.474958  389962 stop.go:39] StopHost: ha-438604-m02
	I0408 11:39:10.475426  389962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:39:10.475488  389962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:39:10.493946  389962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0408 11:39:10.494543  389962 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:39:10.495144  389962 main.go:141] libmachine: Using API Version  1
	I0408 11:39:10.495171  389962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:39:10.495595  389962 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:39:10.498150  389962 out.go:177] * Stopping node "ha-438604-m02"  ...
	I0408 11:39:10.499552  389962 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 11:39:10.499596  389962 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:39:10.499952  389962 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 11:39:10.499997  389962 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:39:10.503627  389962 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:39:10.504035  389962 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:39:10.504078  389962 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:39:10.504211  389962 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:39:10.504422  389962 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:39:10.504632  389962 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:39:10.504839  389962 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:39:10.602542  389962 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 11:39:10.658271  389962 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 11:39:10.716899  389962 main.go:141] libmachine: Stopping "ha-438604-m02"...
	I0408 11:39:10.716974  389962 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:39:10.719314  389962 main.go:141] libmachine: (ha-438604-m02) Calling .Stop
	I0408 11:39:10.723388  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 0/120
	I0408 11:39:11.724927  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 1/120
	I0408 11:39:12.726873  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 2/120
	I0408 11:39:13.728314  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 3/120
	I0408 11:39:14.730415  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 4/120
	I0408 11:39:15.732111  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 5/120
	I0408 11:39:16.734136  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 6/120
	I0408 11:39:17.735567  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 7/120
	I0408 11:39:18.737154  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 8/120
	I0408 11:39:19.739085  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 9/120
	I0408 11:39:20.741199  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 10/120
	I0408 11:39:21.742870  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 11/120
	I0408 11:39:22.744338  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 12/120
	I0408 11:39:23.745867  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 13/120
	I0408 11:39:24.747350  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 14/120
	I0408 11:39:25.749557  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 15/120
	I0408 11:39:26.751214  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 16/120
	I0408 11:39:27.753198  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 17/120
	I0408 11:39:28.754663  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 18/120
	I0408 11:39:29.756269  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 19/120
	I0408 11:39:30.758763  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 20/120
	I0408 11:39:31.761193  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 21/120
	I0408 11:39:32.762404  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 22/120
	I0408 11:39:33.764202  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 23/120
	I0408 11:39:34.765712  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 24/120
	I0408 11:39:35.768052  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 25/120
	I0408 11:39:36.769747  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 26/120
	I0408 11:39:37.771637  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 27/120
	I0408 11:39:38.773070  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 28/120
	I0408 11:39:39.774428  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 29/120
	I0408 11:39:40.776765  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 30/120
	I0408 11:39:41.778356  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 31/120
	I0408 11:39:42.780406  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 32/120
	I0408 11:39:43.782418  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 33/120
	I0408 11:39:44.784074  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 34/120
	I0408 11:39:45.786316  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 35/120
	I0408 11:39:46.787578  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 36/120
	I0408 11:39:47.789199  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 37/120
	I0408 11:39:48.790511  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 38/120
	I0408 11:39:49.792174  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 39/120
	I0408 11:39:50.794158  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 40/120
	I0408 11:39:51.795728  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 41/120
	I0408 11:39:52.797167  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 42/120
	I0408 11:39:53.798707  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 43/120
	I0408 11:39:54.800523  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 44/120
	I0408 11:39:55.802330  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 45/120
	I0408 11:39:56.804263  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 46/120
	I0408 11:39:57.805721  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 47/120
	I0408 11:39:58.807299  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 48/120
	I0408 11:39:59.808955  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 49/120
	I0408 11:40:00.811239  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 50/120
	I0408 11:40:01.812851  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 51/120
	I0408 11:40:02.814329  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 52/120
	I0408 11:40:03.816015  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 53/120
	I0408 11:40:04.817662  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 54/120
	I0408 11:40:05.819532  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 55/120
	I0408 11:40:06.821156  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 56/120
	I0408 11:40:07.822846  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 57/120
	I0408 11:40:08.824949  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 58/120
	I0408 11:40:09.826489  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 59/120
	I0408 11:40:10.828954  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 60/120
	I0408 11:40:11.830332  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 61/120
	I0408 11:40:12.831765  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 62/120
	I0408 11:40:13.833571  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 63/120
	I0408 11:40:14.835227  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 64/120
	I0408 11:40:15.837547  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 65/120
	I0408 11:40:16.839307  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 66/120
	I0408 11:40:17.840855  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 67/120
	I0408 11:40:18.842139  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 68/120
	I0408 11:40:19.844219  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 69/120
	I0408 11:40:20.846331  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 70/120
	I0408 11:40:21.848725  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 71/120
	I0408 11:40:22.850257  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 72/120
	I0408 11:40:23.851784  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 73/120
	I0408 11:40:24.853590  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 74/120
	I0408 11:40:25.855137  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 75/120
	I0408 11:40:26.856811  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 76/120
	I0408 11:40:27.858140  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 77/120
	I0408 11:40:28.859492  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 78/120
	I0408 11:40:29.860907  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 79/120
	I0408 11:40:30.862324  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 80/120
	I0408 11:40:31.864074  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 81/120
	I0408 11:40:32.866273  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 82/120
	I0408 11:40:33.867781  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 83/120
	I0408 11:40:34.869922  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 84/120
	I0408 11:40:35.872153  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 85/120
	I0408 11:40:36.874449  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 86/120
	I0408 11:40:37.875891  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 87/120
	I0408 11:40:38.878141  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 88/120
	I0408 11:40:39.879810  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 89/120
	I0408 11:40:40.882068  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 90/120
	I0408 11:40:41.883479  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 91/120
	I0408 11:40:42.885107  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 92/120
	I0408 11:40:43.886623  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 93/120
	I0408 11:40:44.888479  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 94/120
	I0408 11:40:45.890095  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 95/120
	I0408 11:40:46.891601  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 96/120
	I0408 11:40:47.892909  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 97/120
	I0408 11:40:48.894435  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 98/120
	I0408 11:40:49.896063  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 99/120
	I0408 11:40:50.898283  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 100/120
	I0408 11:40:51.899853  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 101/120
	I0408 11:40:52.901248  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 102/120
	I0408 11:40:53.902773  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 103/120
	I0408 11:40:54.904585  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 104/120
	I0408 11:40:55.906682  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 105/120
	I0408 11:40:56.908222  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 106/120
	I0408 11:40:57.910548  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 107/120
	I0408 11:40:58.912203  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 108/120
	I0408 11:40:59.913912  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 109/120
	I0408 11:41:00.916343  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 110/120
	I0408 11:41:01.918274  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 111/120
	I0408 11:41:02.920405  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 112/120
	I0408 11:41:03.922346  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 113/120
	I0408 11:41:04.924080  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 114/120
	I0408 11:41:05.926256  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 115/120
	I0408 11:41:06.927606  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 116/120
	I0408 11:41:07.929103  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 117/120
	I0408 11:41:08.930754  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 118/120
	I0408 11:41:09.932110  389962 main.go:141] libmachine: (ha-438604-m02) Waiting for machine to stop 119/120
	I0408 11:41:10.932914  389962 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0408 11:41:10.933136  389962 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-438604 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (19.133775964s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:41:10.998421  390378 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:41:10.998561  390378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:10.998594  390378 out.go:304] Setting ErrFile to fd 2...
	I0408 11:41:10.998601  390378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:10.998811  390378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:41:10.999031  390378 out.go:298] Setting JSON to false
	I0408 11:41:10.999065  390378 mustload.go:65] Loading cluster: ha-438604
	I0408 11:41:10.999176  390378 notify.go:220] Checking for updates...
	I0408 11:41:10.999520  390378 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:41:10.999539  390378 status.go:255] checking status of ha-438604 ...
	I0408 11:41:10.999979  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:11.000065  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:11.018277  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39393
	I0408 11:41:11.018788  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:11.019464  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:11.019526  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:11.019949  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:11.020204  390378 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:41:11.021834  390378 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:41:11.021855  390378 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:11.022287  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:11.022336  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:11.039346  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35299
	I0408 11:41:11.039829  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:11.040593  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:11.040627  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:11.041065  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:11.041381  390378 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:41:11.044386  390378 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:11.044910  390378 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:11.044948  390378 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:11.045059  390378 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:11.045393  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:11.045453  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:11.060492  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0408 11:41:11.061004  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:11.061523  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:11.061546  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:11.061857  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:11.062077  390378 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:41:11.062262  390378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:11.062294  390378 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:41:11.065476  390378 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:11.066000  390378 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:11.066031  390378 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:11.066293  390378 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:41:11.066491  390378 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:41:11.066738  390378 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:41:11.066955  390378 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:41:11.161375  390378 ssh_runner.go:195] Run: systemctl --version
	I0408 11:41:11.169196  390378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:11.189306  390378 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:11.189351  390378 api_server.go:166] Checking apiserver status ...
	I0408 11:41:11.189391  390378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:11.206574  390378 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:41:11.218261  390378 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:11.218337  390378 ssh_runner.go:195] Run: ls
	I0408 11:41:11.223528  390378 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:11.229876  390378 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:11.229904  390378 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:41:11.229915  390378 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:11.229932  390378 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:41:11.230228  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:11.230264  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:11.245293  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0408 11:41:11.245719  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:11.246247  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:11.246279  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:11.246646  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:11.246932  390378 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:41:11.248769  390378 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:41:11.248791  390378 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:11.249108  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:11.249147  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:11.264543  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I0408 11:41:11.264985  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:11.265475  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:11.265506  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:11.265936  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:11.266160  390378 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:41:11.269270  390378 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:11.269721  390378 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:11.269750  390378 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:11.269879  390378 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:11.270198  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:11.270235  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:11.285340  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
	I0408 11:41:11.285805  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:11.286336  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:11.286362  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:11.286699  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:11.286874  390378 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:41:11.287092  390378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:11.287115  390378 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:41:11.289859  390378 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:11.290300  390378 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:11.290333  390378 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:11.290455  390378 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:41:11.290626  390378 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:41:11.290830  390378 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:41:11.290965  390378 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	W0408 11:41:29.684044  390378 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:29.684144  390378 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0408 11:41:29.684162  390378 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:29.684169  390378 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:41:29.684191  390378 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:29.684204  390378 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:41:29.684614  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:29.684690  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:29.701150  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I0408 11:41:29.701778  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:29.702385  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:29.702411  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:29.702779  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:29.702971  390378 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:41:29.704613  390378 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:41:29.704635  390378 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:29.704927  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:29.704972  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:29.720350  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0408 11:41:29.720918  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:29.721539  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:29.721577  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:29.721940  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:29.722183  390378 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:41:29.725709  390378 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:29.726166  390378 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:29.726194  390378 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:29.726544  390378 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:29.727060  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:29.727118  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:29.742890  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I0408 11:41:29.743349  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:29.743857  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:29.743883  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:29.744237  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:29.744446  390378 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:41:29.744648  390378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:29.744680  390378 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:41:29.747660  390378 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:29.748341  390378 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:29.748374  390378 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:29.748540  390378 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:41:29.748743  390378 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:41:29.748915  390378 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:41:29.749070  390378 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:41:29.838334  390378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:29.857622  390378 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:29.857656  390378 api_server.go:166] Checking apiserver status ...
	I0408 11:41:29.857701  390378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:29.878477  390378 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:41:29.888230  390378 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:29.888289  390378 ssh_runner.go:195] Run: ls
	I0408 11:41:29.892953  390378 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:29.899557  390378 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:29.899600  390378 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:41:29.899614  390378 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
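
For m03 the freezer-cgroup lookup only produces a warning; the check then falls back to probing /healthz on the HA virtual IP, which returns 200. A hedged Go sketch of that kind of probe, with the URL taken from the log; skipping TLS verification and the 5s timeout are assumptions made so the example works against a self-signed apiserver certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// URL comes from the log; InsecureSkipVerify and the timeout are assumptions.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" on a healthy apiserver
	}
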
	I0408 11:41:29.899638  390378 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:41:29.899968  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:29.900024  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:29.915369  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0408 11:41:29.915930  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:29.916421  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:29.916449  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:29.916813  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:29.917040  390378 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:41:29.918761  390378 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:41:29.918782  390378 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:29.919065  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:29.919101  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:29.936228  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
	I0408 11:41:29.936679  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:29.937167  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:29.937192  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:29.937614  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:29.937826  390378 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:41:29.940887  390378 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:29.941341  390378 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:29.941374  390378 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:29.941524  390378 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:29.941851  390378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:29.941891  390378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:29.957306  390378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0408 11:41:29.957772  390378 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:29.958233  390378 main.go:141] libmachine: Using API Version  1
	I0408 11:41:29.958255  390378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:29.958621  390378 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:29.958839  390378 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:41:29.959074  390378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:29.959098  390378 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:41:29.962067  390378 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:29.962513  390378 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:29.962541  390378 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:29.962626  390378 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:41:29.962801  390378 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:41:29.962965  390378 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:41:29.963115  390378 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:41:30.049536  390378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:30.066597  390378 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
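
m04 is a worker node, so only the host and kubelet are checked (via the systemctl is-active command above) and the APIServer and Kubeconfig fields are reported as Irrelevant. A minimal Go sketch of that unit check, run locally rather than over SSH as an assumption for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
		// The status command runs an equivalent check over SSH on each node.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet not active:", err)
			return
		}
		fmt.Println("kubelet active")
	}
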

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-438604 -n ha-438604
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-438604 logs -n 25: (1.513515926s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604:/home/docker/cp-test_ha-438604-m03_ha-438604.txt                       |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604 sudo cat                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604.txt                                 |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m04 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp testdata/cp-test.txt                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604:/home/docker/cp-test_ha-438604-m04_ha-438604.txt                       |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604 sudo cat                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604.txt                                 |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03:/home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m03 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-438604 node stop m02 -v=7                                                     | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:34:02
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
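
The header documents the klog-style line layout used throughout this log. A small Go sketch that parses one such line; the regular expression is an assumption written against the documented layout, not code from minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I0408 11:34:02.066668  385781 out.go:291] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}

Run against the first line of this log, it splits out the severity (I), the date (0408), the timestamp, the PID, the source location (out.go:291) and the message.
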
	I0408 11:34:02.066668  385781 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:34:02.066787  385781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:34:02.066794  385781 out.go:304] Setting ErrFile to fd 2...
	I0408 11:34:02.066800  385781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:34:02.067043  385781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:34:02.067747  385781 out.go:298] Setting JSON to false
	I0408 11:34:02.068775  385781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4585,"bootTime":1712571457,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:34:02.068856  385781 start.go:139] virtualization: kvm guest
	I0408 11:34:02.071565  385781 out.go:177] * [ha-438604] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:34:02.073145  385781 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:34:02.073101  385781 notify.go:220] Checking for updates...
	I0408 11:34:02.074690  385781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:34:02.076361  385781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:34:02.077807  385781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:34:02.079178  385781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:34:02.080661  385781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:34:02.082398  385781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:34:02.119763  385781 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 11:34:02.121146  385781 start.go:297] selected driver: kvm2
	I0408 11:34:02.121161  385781 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:34:02.121173  385781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:34:02.121906  385781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:34:02.121981  385781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:34:02.137800  385781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:34:02.137864  385781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:34:02.138102  385781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:34:02.138169  385781 cni.go:84] Creating CNI manager for ""
	I0408 11:34:02.138189  385781 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 11:34:02.138194  385781 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 11:34:02.138248  385781 start.go:340] cluster config:
	{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:34:02.138345  385781 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:34:02.140305  385781 out.go:177] * Starting "ha-438604" primary control-plane node in "ha-438604" cluster
	I0408 11:34:02.141699  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:34:02.141751  385781 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 11:34:02.141759  385781 cache.go:56] Caching tarball of preloaded images
	I0408 11:34:02.141844  385781 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:34:02.141854  385781 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:34:02.142160  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:34:02.142181  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json: {Name:mk0dff9aa3ef342d215af92fdd6656ec72244fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:02.142319  385781 start.go:360] acquireMachinesLock for ha-438604: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:34:02.142347  385781 start.go:364] duration metric: took 14.556µs to acquireMachinesLock for "ha-438604"
	I0408 11:34:02.142364  385781 start.go:93] Provisioning new machine with config: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:34:02.142413  385781 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 11:34:02.145227  385781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:34:02.145437  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:34:02.145487  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:34:02.160659  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0408 11:34:02.161093  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:34:02.161657  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:34:02.161685  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:34:02.162140  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:34:02.162462  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:02.162651  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:02.163159  385781 start.go:159] libmachine.API.Create for "ha-438604" (driver="kvm2")
	I0408 11:34:02.163198  385781 client.go:168] LocalClient.Create starting
	I0408 11:34:02.163239  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:34:02.163282  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:34:02.163301  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:34:02.163420  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:34:02.163445  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:34:02.163464  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:34:02.163495  385781 main.go:141] libmachine: Running pre-create checks...
	I0408 11:34:02.163510  385781 main.go:141] libmachine: (ha-438604) Calling .PreCreateCheck
	I0408 11:34:02.164642  385781 main.go:141] libmachine: (ha-438604) Calling .GetConfigRaw
	I0408 11:34:02.165113  385781 main.go:141] libmachine: Creating machine...
	I0408 11:34:02.165130  385781 main.go:141] libmachine: (ha-438604) Calling .Create
	I0408 11:34:02.165280  385781 main.go:141] libmachine: (ha-438604) Creating KVM machine...
	I0408 11:34:02.166552  385781 main.go:141] libmachine: (ha-438604) DBG | found existing default KVM network
	I0408 11:34:02.167251  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.167108  385804 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0408 11:34:02.167275  385781 main.go:141] libmachine: (ha-438604) DBG | created network xml: 
	I0408 11:34:02.167298  385781 main.go:141] libmachine: (ha-438604) DBG | <network>
	I0408 11:34:02.167318  385781 main.go:141] libmachine: (ha-438604) DBG |   <name>mk-ha-438604</name>
	I0408 11:34:02.167329  385781 main.go:141] libmachine: (ha-438604) DBG |   <dns enable='no'/>
	I0408 11:34:02.167344  385781 main.go:141] libmachine: (ha-438604) DBG |   
	I0408 11:34:02.167357  385781 main.go:141] libmachine: (ha-438604) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 11:34:02.167364  385781 main.go:141] libmachine: (ha-438604) DBG |     <dhcp>
	I0408 11:34:02.167374  385781 main.go:141] libmachine: (ha-438604) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 11:34:02.167382  385781 main.go:141] libmachine: (ha-438604) DBG |     </dhcp>
	I0408 11:34:02.167392  385781 main.go:141] libmachine: (ha-438604) DBG |   </ip>
	I0408 11:34:02.167403  385781 main.go:141] libmachine: (ha-438604) DBG |   
	I0408 11:34:02.167412  385781 main.go:141] libmachine: (ha-438604) DBG | </network>
	I0408 11:34:02.167423  385781 main.go:141] libmachine: (ha-438604) DBG | 
	I0408 11:34:02.172990  385781 main.go:141] libmachine: (ha-438604) DBG | trying to create private KVM network mk-ha-438604 192.168.39.0/24...
	I0408 11:34:02.238721  385781 main.go:141] libmachine: (ha-438604) DBG | private KVM network mk-ha-438604 192.168.39.0/24 created
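
With the private network mk-ha-438604 created from the XML above, its definition can also be inspected from the host side. A hedged Go sketch that shells out to virsh, using the network name from the log and the qemu:///system connection from the cluster config as assumptions:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Dump the definition of the network minikube just created, as libvirt sees it.
		// "mk-ha-438604" is the network name from the log above.
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dumpxml", "mk-ha-438604").CombinedOutput()
		if err != nil {
			fmt.Println("virsh failed:", err)
		}
		fmt.Print(string(out))
	}
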
	I0408 11:34:02.238760  385781 main.go:141] libmachine: (ha-438604) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604 ...
	I0408 11:34:02.238774  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.238654  385804 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:34:02.238788  385781 main.go:141] libmachine: (ha-438604) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:34:02.238850  385781 main.go:141] libmachine: (ha-438604) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:34:02.501016  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.500853  385804 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa...
	I0408 11:34:02.714632  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.714471  385804 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/ha-438604.rawdisk...
	I0408 11:34:02.714659  385781 main.go:141] libmachine: (ha-438604) DBG | Writing magic tar header
	I0408 11:34:02.714671  385781 main.go:141] libmachine: (ha-438604) DBG | Writing SSH key tar header
	I0408 11:34:02.714686  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.714604  385804 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604 ...
	I0408 11:34:02.714707  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604
	I0408 11:34:02.714854  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604 (perms=drwx------)
	I0408 11:34:02.714903  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:34:02.714916  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:34:02.714934  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:34:02.714946  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:34:02.714957  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:34:02.714970  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:34:02.715003  385781 main.go:141] libmachine: (ha-438604) Creating domain...
	I0408 11:34:02.715021  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:34:02.715031  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:34:02.715043  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:34:02.715061  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:34:02.715071  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home
	I0408 11:34:02.715082  385781 main.go:141] libmachine: (ha-438604) DBG | Skipping /home - not owner
	I0408 11:34:02.716209  385781 main.go:141] libmachine: (ha-438604) define libvirt domain using xml: 
	I0408 11:34:02.716233  385781 main.go:141] libmachine: (ha-438604) <domain type='kvm'>
	I0408 11:34:02.716265  385781 main.go:141] libmachine: (ha-438604)   <name>ha-438604</name>
	I0408 11:34:02.716287  385781 main.go:141] libmachine: (ha-438604)   <memory unit='MiB'>2200</memory>
	I0408 11:34:02.716297  385781 main.go:141] libmachine: (ha-438604)   <vcpu>2</vcpu>
	I0408 11:34:02.716308  385781 main.go:141] libmachine: (ha-438604)   <features>
	I0408 11:34:02.716334  385781 main.go:141] libmachine: (ha-438604)     <acpi/>
	I0408 11:34:02.716358  385781 main.go:141] libmachine: (ha-438604)     <apic/>
	I0408 11:34:02.716365  385781 main.go:141] libmachine: (ha-438604)     <pae/>
	I0408 11:34:02.716376  385781 main.go:141] libmachine: (ha-438604)     
	I0408 11:34:02.716423  385781 main.go:141] libmachine: (ha-438604)   </features>
	I0408 11:34:02.716428  385781 main.go:141] libmachine: (ha-438604)   <cpu mode='host-passthrough'>
	I0408 11:34:02.716445  385781 main.go:141] libmachine: (ha-438604)   
	I0408 11:34:02.716459  385781 main.go:141] libmachine: (ha-438604)   </cpu>
	I0408 11:34:02.716475  385781 main.go:141] libmachine: (ha-438604)   <os>
	I0408 11:34:02.716502  385781 main.go:141] libmachine: (ha-438604)     <type>hvm</type>
	I0408 11:34:02.716511  385781 main.go:141] libmachine: (ha-438604)     <boot dev='cdrom'/>
	I0408 11:34:02.716517  385781 main.go:141] libmachine: (ha-438604)     <boot dev='hd'/>
	I0408 11:34:02.716527  385781 main.go:141] libmachine: (ha-438604)     <bootmenu enable='no'/>
	I0408 11:34:02.716534  385781 main.go:141] libmachine: (ha-438604)   </os>
	I0408 11:34:02.716547  385781 main.go:141] libmachine: (ha-438604)   <devices>
	I0408 11:34:02.716556  385781 main.go:141] libmachine: (ha-438604)     <disk type='file' device='cdrom'>
	I0408 11:34:02.716572  385781 main.go:141] libmachine: (ha-438604)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/boot2docker.iso'/>
	I0408 11:34:02.716582  385781 main.go:141] libmachine: (ha-438604)       <target dev='hdc' bus='scsi'/>
	I0408 11:34:02.716588  385781 main.go:141] libmachine: (ha-438604)       <readonly/>
	I0408 11:34:02.716595  385781 main.go:141] libmachine: (ha-438604)     </disk>
	I0408 11:34:02.716601  385781 main.go:141] libmachine: (ha-438604)     <disk type='file' device='disk'>
	I0408 11:34:02.716611  385781 main.go:141] libmachine: (ha-438604)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:34:02.716638  385781 main.go:141] libmachine: (ha-438604)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/ha-438604.rawdisk'/>
	I0408 11:34:02.716654  385781 main.go:141] libmachine: (ha-438604)       <target dev='hda' bus='virtio'/>
	I0408 11:34:02.716659  385781 main.go:141] libmachine: (ha-438604)     </disk>
	I0408 11:34:02.716664  385781 main.go:141] libmachine: (ha-438604)     <interface type='network'>
	I0408 11:34:02.716669  385781 main.go:141] libmachine: (ha-438604)       <source network='mk-ha-438604'/>
	I0408 11:34:02.716677  385781 main.go:141] libmachine: (ha-438604)       <model type='virtio'/>
	I0408 11:34:02.716682  385781 main.go:141] libmachine: (ha-438604)     </interface>
	I0408 11:34:02.716688  385781 main.go:141] libmachine: (ha-438604)     <interface type='network'>
	I0408 11:34:02.716694  385781 main.go:141] libmachine: (ha-438604)       <source network='default'/>
	I0408 11:34:02.716701  385781 main.go:141] libmachine: (ha-438604)       <model type='virtio'/>
	I0408 11:34:02.716706  385781 main.go:141] libmachine: (ha-438604)     </interface>
	I0408 11:34:02.716712  385781 main.go:141] libmachine: (ha-438604)     <serial type='pty'>
	I0408 11:34:02.716718  385781 main.go:141] libmachine: (ha-438604)       <target port='0'/>
	I0408 11:34:02.716727  385781 main.go:141] libmachine: (ha-438604)     </serial>
	I0408 11:34:02.716743  385781 main.go:141] libmachine: (ha-438604)     <console type='pty'>
	I0408 11:34:02.716763  385781 main.go:141] libmachine: (ha-438604)       <target type='serial' port='0'/>
	I0408 11:34:02.716783  385781 main.go:141] libmachine: (ha-438604)     </console>
	I0408 11:34:02.716793  385781 main.go:141] libmachine: (ha-438604)     <rng model='virtio'>
	I0408 11:34:02.716807  385781 main.go:141] libmachine: (ha-438604)       <backend model='random'>/dev/random</backend>
	I0408 11:34:02.716817  385781 main.go:141] libmachine: (ha-438604)     </rng>
	I0408 11:34:02.716828  385781 main.go:141] libmachine: (ha-438604)     
	I0408 11:34:02.716842  385781 main.go:141] libmachine: (ha-438604)     
	I0408 11:34:02.716855  385781 main.go:141] libmachine: (ha-438604)   </devices>
	I0408 11:34:02.716864  385781 main.go:141] libmachine: (ha-438604) </domain>
	I0408 11:34:02.716874  385781 main.go:141] libmachine: (ha-438604) 
	I0408 11:34:02.721501  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:27:b9:bb in network default
	I0408 11:34:02.722194  385781 main.go:141] libmachine: (ha-438604) Ensuring networks are active...
	I0408 11:34:02.722217  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:02.722897  385781 main.go:141] libmachine: (ha-438604) Ensuring network default is active
	I0408 11:34:02.723177  385781 main.go:141] libmachine: (ha-438604) Ensuring network mk-ha-438604 is active
	I0408 11:34:02.723799  385781 main.go:141] libmachine: (ha-438604) Getting domain xml...
	I0408 11:34:02.724605  385781 main.go:141] libmachine: (ha-438604) Creating domain...
	I0408 11:34:03.908787  385781 main.go:141] libmachine: (ha-438604) Waiting to get IP...
	I0408 11:34:03.909769  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:03.910172  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:03.910226  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:03.910159  385804 retry.go:31] will retry after 221.755655ms: waiting for machine to come up
	I0408 11:34:04.133792  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:04.134236  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:04.134269  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:04.134184  385804 retry.go:31] will retry after 322.264919ms: waiting for machine to come up
	I0408 11:34:04.457884  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:04.458279  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:04.458325  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:04.458266  385804 retry.go:31] will retry after 321.349466ms: waiting for machine to come up
	I0408 11:34:04.780692  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:04.781160  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:04.781191  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:04.781140  385804 retry.go:31] will retry after 497.855083ms: waiting for machine to come up
	I0408 11:34:05.281050  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:05.281620  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:05.281650  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:05.281557  385804 retry.go:31] will retry after 518.591769ms: waiting for machine to come up
	I0408 11:34:05.801844  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:05.802159  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:05.802192  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:05.802134  385804 retry.go:31] will retry after 931.498076ms: waiting for machine to come up
	I0408 11:34:06.735497  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:06.735980  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:06.736015  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:06.735911  385804 retry.go:31] will retry after 791.307745ms: waiting for machine to come up
	I0408 11:34:07.528758  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:07.529217  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:07.529246  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:07.529171  385804 retry.go:31] will retry after 1.221674233s: waiting for machine to come up
	I0408 11:34:08.752672  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:08.753212  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:08.753241  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:08.753149  385804 retry.go:31] will retry after 1.230439476s: waiting for machine to come up
	I0408 11:34:09.984915  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:09.985323  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:09.985352  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:09.985278  385804 retry.go:31] will retry after 2.06240866s: waiting for machine to come up
	I0408 11:34:12.050567  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:12.050969  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:12.051003  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:12.050927  385804 retry.go:31] will retry after 2.508679148s: waiting for machine to come up
	I0408 11:34:14.562492  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:14.562927  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:14.562958  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:14.562867  385804 retry.go:31] will retry after 3.244104264s: waiting for machine to come up
	I0408 11:34:17.808998  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:17.809378  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:17.809413  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:17.809318  385804 retry.go:31] will retry after 4.471776163s: waiting for machine to come up
	I0408 11:34:22.283484  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:22.283945  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:22.283974  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:22.283860  385804 retry.go:31] will retry after 5.2043868s: waiting for machine to come up
	I0408 11:34:27.490112  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.490616  385781 main.go:141] libmachine: (ha-438604) Found IP for machine: 192.168.39.99
	I0408 11:34:27.490660  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has current primary IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
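
The wait-for-IP sequence above polls the DHCP leases with growing delays (221ms, 322ms, ... up to several seconds) until the domain reports an address. A minimal Go sketch of that retry-with-increasing-backoff shape, where lookupIP is a hypothetical stand-in for the real libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the domain's DHCP lease.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, delay)
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay += delay / 2 // grow the delay, roughly like the log's retry intervals
			}
		}
		fmt.Println("gave up waiting for machine IP")
	}
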
	I0408 11:34:27.490669  385781 main.go:141] libmachine: (ha-438604) Reserving static IP address...
	I0408 11:34:27.491212  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find host DHCP lease matching {name: "ha-438604", mac: "52:54:00:cc:8e:55", ip: "192.168.39.99"} in network mk-ha-438604
	I0408 11:34:27.565711  385781 main.go:141] libmachine: (ha-438604) DBG | Getting to WaitForSSH function...
	I0408 11:34:27.565742  385781 main.go:141] libmachine: (ha-438604) Reserved static IP address: 192.168.39.99
	I0408 11:34:27.565755  385781 main.go:141] libmachine: (ha-438604) Waiting for SSH to be available...
	I0408 11:34:27.568333  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.568715  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.568749  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.568954  385781 main.go:141] libmachine: (ha-438604) DBG | Using SSH client type: external
	I0408 11:34:27.568981  385781 main.go:141] libmachine: (ha-438604) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa (-rw-------)
	I0408 11:34:27.569022  385781 main.go:141] libmachine: (ha-438604) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:34:27.569037  385781 main.go:141] libmachine: (ha-438604) DBG | About to run SSH command:
	I0408 11:34:27.569051  385781 main.go:141] libmachine: (ha-438604) DBG | exit 0
	I0408 11:34:27.699841  385781 main.go:141] libmachine: (ha-438604) DBG | SSH cmd err, output: <nil>: 
	I0408 11:34:27.700076  385781 main.go:141] libmachine: (ha-438604) KVM machine creation complete!
	I0408 11:34:27.700405  385781 main.go:141] libmachine: (ha-438604) Calling .GetConfigRaw
	I0408 11:34:27.701078  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:27.701295  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:27.701514  385781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:34:27.701540  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:34:27.702881  385781 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:34:27.702912  385781 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:34:27.702922  385781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:34:27.702939  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:27.705333  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.705694  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.705747  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.705871  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:27.706086  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.706237  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.706409  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:27.706542  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:27.706805  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:27.706820  385781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:34:27.823493  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
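The two probes above amount to running "exit 0" over SSH until the freshly created guest answers. A minimal stand-alone sketch of such a wait loop, written in Go with golang.org/x/crypto/ssh (an illustration, not minikube's own libmachine code; the address, user and key path are copied from the log lines above):

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials host:22 with the given key until a trivial command
// succeeds, mirroring the "About to run SSH command: exit 0" probe above.
func waitForSSH(host, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
}

func main() {
	key := "/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa"
	if err := waitForSSH("192.168.39.99", "docker", key, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}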
	I0408 11:34:27.823520  385781 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:34:27.823532  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:27.826476  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.826874  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.826909  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.827113  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:27.827370  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.827609  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.827776  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:27.827999  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:27.828195  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:27.828207  385781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:34:27.945439  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:34:27.945540  385781 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:34:27.945548  385781 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:34:27.945556  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:27.945884  385781 buildroot.go:166] provisioning hostname "ha-438604"
	I0408 11:34:27.945925  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:27.946183  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:27.949131  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.949519  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.949570  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.949637  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:27.949858  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.950020  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.950183  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:27.950330  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:27.950563  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:27.950578  385781 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604 && echo "ha-438604" | sudo tee /etc/hostname
	I0408 11:34:28.080893  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604
	
	I0408 11:34:28.080931  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.084090  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.084520  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.084564  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.084756  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.085024  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.085232  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.085420  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.085602  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:28.085827  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:28.085847  385781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:34:28.210052  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
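The hostname command above is idempotent: it only touches /etc/hosts if no line already ends in the node name, and it prefers rewriting the existing 127.0.1.1 entry over appending a new one. The same logic, expressed as a pure-string Go function for illustration (not minikube code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic in the log above: if no /etc/hosts
// line already maps to the node name, rewrite the 127.0.1.1 entry when one
// exists, otherwise append a new entry.
func ensureHostname(hosts, name string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(in, "ha-438604"))
}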
	I0408 11:34:28.210093  385781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:34:28.210125  385781 buildroot.go:174] setting up certificates
	I0408 11:34:28.210140  385781 provision.go:84] configureAuth start
	I0408 11:34:28.210151  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:28.210475  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:28.212972  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.213319  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.213340  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.213549  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.215880  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.216321  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.216352  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.216464  385781 provision.go:143] copyHostCerts
	I0408 11:34:28.216501  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:34:28.216546  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:34:28.216570  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:34:28.216654  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:34:28.216795  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:34:28.216824  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:34:28.216833  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:34:28.216877  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:34:28.216972  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:34:28.216999  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:34:28.217016  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:34:28.217055  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:34:28.217140  385781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604 san=[127.0.0.1 192.168.39.99 ha-438604 localhost minikube]
	I0408 11:34:28.485726  385781 provision.go:177] copyRemoteCerts
	I0408 11:34:28.485798  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:34:28.485831  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.488756  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.489078  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.489112  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.489244  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.489499  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.489693  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.489901  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:28.578890  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:34:28.579029  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:34:28.604442  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:34:28.604536  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0408 11:34:28.630287  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:34:28.630383  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:34:28.657271  385781 provision.go:87] duration metric: took 447.117011ms to configureAuth
	I0408 11:34:28.657307  385781 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:34:28.657478  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:34:28.657572  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.660193  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.660540  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.660570  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.660702  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.660919  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.661109  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.661223  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.661428  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:28.661601  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:28.661616  385781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:34:28.950417  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:34:28.950448  385781 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:34:28.950472  385781 main.go:141] libmachine: (ha-438604) Calling .GetURL
	I0408 11:34:28.951731  385781 main.go:141] libmachine: (ha-438604) DBG | Using libvirt version 6000000
	I0408 11:34:28.954061  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.954343  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.954371  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.954566  385781 main.go:141] libmachine: Docker is up and running!
	I0408 11:34:28.954581  385781 main.go:141] libmachine: Reticulating splines...
	I0408 11:34:28.954587  385781 client.go:171] duration metric: took 26.791382418s to LocalClient.Create
	I0408 11:34:28.954607  385781 start.go:167] duration metric: took 26.791450949s to libmachine.API.Create "ha-438604"
	I0408 11:34:28.954616  385781 start.go:293] postStartSetup for "ha-438604" (driver="kvm2")
	I0408 11:34:28.954627  385781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:34:28.954644  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:28.954883  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:34:28.954907  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.957054  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.957381  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.957407  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.957548  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.957736  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.957911  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.958098  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:29.046380  385781 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:34:29.050754  385781 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:34:29.050781  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:34:29.050868  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:34:29.050983  385781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:34:29.051000  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:34:29.051127  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:34:29.060928  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:34:29.086596  385781 start.go:296] duration metric: took 131.963029ms for postStartSetup
	I0408 11:34:29.086656  385781 main.go:141] libmachine: (ha-438604) Calling .GetConfigRaw
	I0408 11:34:29.087277  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:29.090168  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.090524  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.090550  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.090888  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:34:29.091133  385781 start.go:128] duration metric: took 26.948707881s to createHost
	I0408 11:34:29.091165  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:29.093225  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.093582  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.093618  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.093707  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:29.093931  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.094076  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.094207  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:29.094334  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:29.094577  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:29.094598  385781 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:34:29.208766  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576069.185207871
	
	I0408 11:34:29.208800  385781 fix.go:216] guest clock: 1712576069.185207871
	I0408 11:34:29.208812  385781 fix.go:229] Guest: 2024-04-08 11:34:29.185207871 +0000 UTC Remote: 2024-04-08 11:34:29.091150036 +0000 UTC m=+27.074198880 (delta=94.057835ms)
	I0408 11:34:29.208845  385781 fix.go:200] guest clock delta is within tolerance: 94.057835ms
	I0408 11:34:29.208852  385781 start.go:83] releasing machines lock for "ha-438604", held for 27.066494886s
	I0408 11:34:29.208879  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.209176  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:29.212055  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.212432  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.212468  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.212652  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.213176  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.213342  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.213435  385781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:34:29.213478  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:29.213601  385781 ssh_runner.go:195] Run: cat /version.json
	I0408 11:34:29.213628  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:29.215878  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216170  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216204  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.216224  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216343  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:29.216532  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.216552  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.216581  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216713  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:29.216733  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:29.216862  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:29.216915  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.217052  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:29.217172  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:29.297222  385781 ssh_runner.go:195] Run: systemctl --version
	I0408 11:34:29.334723  385781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:34:29.495371  385781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:34:29.501487  385781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:34:29.501582  385781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:34:29.519326  385781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:34:29.519359  385781 start.go:494] detecting cgroup driver to use...
	I0408 11:34:29.519440  385781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:34:29.535461  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:34:29.550170  385781 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:34:29.550244  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:34:29.564867  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:34:29.579761  385781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:34:29.700044  385781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:34:29.837416  385781 docker.go:233] disabling docker service ...
	I0408 11:34:29.837504  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:34:29.853404  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:34:29.867589  385781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:34:30.001145  385781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:34:30.133223  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:34:30.154686  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:34:30.175031  385781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:34:30.175099  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.186271  385781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:34:30.186343  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.197564  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.208654  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.219799  385781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:34:30.231167  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.242152  385781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.260537  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.272206  385781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:34:30.282884  385781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:34:30.282948  385781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:34:30.297985  385781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:34:30.307978  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:34:30.435831  385781 ssh_runner.go:195] Run: sudo systemctl restart crio
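The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup and open unprivileged low ports, after which CRI-O is restarted. A rough in-memory illustration of the first two substitutions (a sketch only, operating on a made-up config fragment rather than the real file):

package main

import (
	"fmt"
	"regexp"
)

// Apply the same line replacements the sed commands above perform, but on an
// in-memory fragment; key names mirror /etc/crio/crio.conf.d/02-crio.conf.
func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}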
	I0408 11:34:30.585561  385781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:34:30.585654  385781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:34:30.590971  385781 start.go:562] Will wait 60s for crictl version
	I0408 11:34:30.591057  385781 ssh_runner.go:195] Run: which crictl
	I0408 11:34:30.595229  385781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:34:30.635555  385781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:34:30.635668  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:34:30.665322  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:34:30.698754  385781 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:34:30.700339  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:30.703208  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:30.703583  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:30.703611  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:30.704027  385781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:34:30.708371  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:34:30.722166  385781 kubeadm.go:877] updating cluster {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 11:34:30.722272  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:34:30.722321  385781 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:34:30.760155  385781 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 11:34:30.760232  385781 ssh_runner.go:195] Run: which lz4
	I0408 11:34:30.764524  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0408 11:34:30.764628  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 11:34:30.769065  385781 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 11:34:30.769097  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 11:34:32.282117  385781 crio.go:462] duration metric: took 1.517509219s to copy over tarball
	I0408 11:34:32.282195  385781 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 11:34:34.601298  385781 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.319070915s)
	I0408 11:34:34.601338  385781 crio.go:469] duration metric: took 2.319186776s to extract the tarball
	I0408 11:34:34.601360  385781 ssh_runner.go:146] rm: /preloaded.tar.lz4
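The preload path avoids pulling images over the network: a roughly 400 MB lz4 tarball of pre-pulled images is copied to /preloaded.tar.lz4, unpacked into /var, and then removed, with the log recording a duration metric for each step. A small Go sketch of timing such a command (illustrative only; it assumes the tarball is already in place, lz4 is installed, and the caller may run the command via sudo):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// runTimed runs a command and reports how long it took, echoing the
// "duration metric: took ..." lines in the log above.
func runTimed(name string, args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	return time.Since(start), err
}

func main() {
	d, err := runTimed("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("extracted preload in %s\n", d)
}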
	I0408 11:34:34.640264  385781 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:34:34.692273  385781 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:34:34.692300  385781 cache_images.go:84] Images are preloaded, skipping loading
	I0408 11:34:34.692309  385781 kubeadm.go:928] updating node { 192.168.39.99 8443 v1.29.3 crio true true} ...
	I0408 11:34:34.692463  385781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:34:34.692541  385781 ssh_runner.go:195] Run: crio config
	I0408 11:34:34.747951  385781 cni.go:84] Creating CNI manager for ""
	I0408 11:34:34.747975  385781 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 11:34:34.747986  385781 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 11:34:34.748008  385781 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-438604 NodeName:ha-438604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 11:34:34.748174  385781 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-438604"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
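The generated kubeadm.yaml above is a single YAML stream with four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A self-contained Go snippet (not part of minikube) that splits such a stream and lists the document kinds, useful as a quick sanity check:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// listKinds splits a multi-document kubeadm config (like the one printed
// above) on "---" separators and reports each document's kind.
func listKinds(doc string) []string {
	var kinds []string
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for _, d := range strings.Split(doc, "\n---\n") {
		if m := kindRe.FindStringSubmatch(d); m != nil {
			kinds = append(kinds, m[1])
		}
	}
	return kinds
}

func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	fmt.Println(listKinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}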
	I0408 11:34:34.748205  385781 kube-vip.go:111] generating kube-vip config ...
	I0408 11:34:34.748249  385781 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:34:34.766601  385781 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:34:34.766743  385781 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
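The static pod manifest above runs kube-vip v0.7.1 with ARP-based leader election (vip_leaderelection) and control-plane load balancing (lb_enable) on the virtual IP 192.168.39.254, port 8443, which is also the controlPlaneEndpoint used in the kubeadm config. A tiny Go probe to check that the VIP is answering once a leader holds it (a sketch only, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeVIP checks that the kube-vip control-plane address from the manifest
// above (192.168.39.254:8443) accepts TCP connections.
func probeVIP(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeVIP("192.168.39.254:8443"); err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	fmt.Println("control-plane VIP is answering on 8443")
}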
	I0408 11:34:34.766813  385781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:34:34.778696  385781 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 11:34:34.778797  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0408 11:34:34.789620  385781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0408 11:34:34.808244  385781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:34:34.827056  385781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0408 11:34:34.845314  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0408 11:34:34.863846  385781 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:34:34.868386  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:34:34.882352  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:34:35.026177  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:34:35.045651  385781 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.99
	I0408 11:34:35.045678  385781 certs.go:194] generating shared ca certs ...
	I0408 11:34:35.045722  385781 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.045914  385781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:34:35.045984  385781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:34:35.046001  385781 certs.go:256] generating profile certs ...
	I0408 11:34:35.046078  385781 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:34:35.046117  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt with IP's: []
	I0408 11:34:35.413373  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt ...
	I0408 11:34:35.413422  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt: {Name:mk3b7e649553e94d1cd8e4133ae9117a1d5de74d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.413656  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key ...
	I0408 11:34:35.413676  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key: {Name:mk319d1da2826da2f55614b44acfb24a5466deec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.413799  385781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7
	I0408 11:34:35.413820  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.254]
	I0408 11:34:35.754130  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7 ...
	I0408 11:34:35.754176  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7: {Name:mka209038fcbc41dcf872a310f70eacfb93fd5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.754349  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7 ...
	I0408 11:34:35.754365  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7: {Name:mk2c706c31402acdb212b4716cccdea007e4227c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.754435  385781 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:34:35.754508  385781 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
	I0408 11:34:35.754559  385781 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
	I0408 11:34:35.754574  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt with IP's: []
	I0408 11:34:35.917959  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt ...
	I0408 11:34:35.917997  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt: {Name:mk7f0e598c497fabd4116cfe31d470b2ad37afd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.918151  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key ...
	I0408 11:34:35.918164  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key: {Name:mk6bdcbb968ca03ea6fe017bc03bc5094402d346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
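The apiserver certificate generated above carries SANs for the service IP (10.96.0.1), loopback, the node IP (192.168.39.99) and the kube-vip address (192.168.39.254), so clients can reach the API server through any of them. A minimal Go sketch of issuing a certificate with those SANs (self-signed for brevity; minikube signs its profile certs with minikubeCA, and the subject used here is a placeholder):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the apiserver cert generated in the log above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.99"),
		net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // placeholder subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     []string{"ha-438604", "localhost", "minikube"},
	}
	// Self-signed for brevity; a real setup would sign with a CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}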
	I0408 11:34:35.918236  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:34:35.918254  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:34:35.918264  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:34:35.918277  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:34:35.918289  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:34:35.918302  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:34:35.918315  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:34:35.918326  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:34:35.918380  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:34:35.918416  385781 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:34:35.918423  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:34:35.918444  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:34:35.918463  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:34:35.918480  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:34:35.918516  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:34:35.918541  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:34:35.918571  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:35.918599  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:34:35.919208  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:34:35.951404  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:34:35.985713  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:34:36.014075  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:34:36.047144  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 11:34:36.076634  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 11:34:36.105534  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:34:36.132785  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:34:36.161373  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:34:36.189785  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:34:36.217551  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:34:36.245896  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 11:34:36.265595  385781 ssh_runner.go:195] Run: openssl version
	I0408 11:34:36.272414  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:34:36.284650  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:36.290307  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:36.290387  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:36.296816  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:34:36.309837  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:34:36.321591  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:34:36.326829  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:34:36.326907  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:34:36.333615  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:34:36.346280  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:34:36.359487  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:34:36.365862  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:34:36.365971  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:34:36.373862  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
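The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs: the link name is derived from the certificate itself. A minimal sketch of that step for a single certificate, using a placeholder path rather than anything from this run:

    # Hypothetical illustration of the hash-named symlink creation logged above.
    # OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0, so the hash must
    # be computed from the certificate being linked.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${hash}.0"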
	I0408 11:34:36.386273  385781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:34:36.391001  385781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:34:36.391080  385781 kubeadm.go:391] StartCluster: {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:34:36.391237  385781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 11:34:36.391292  385781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:34:36.434257  385781 cri.go:89] found id: ""
	I0408 11:34:36.434342  385781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 11:34:36.445873  385781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 11:34:36.456984  385781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 11:34:36.468015  385781 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 11:34:36.468049  385781 kubeadm.go:156] found existing configuration files:
	
	I0408 11:34:36.468167  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 11:34:36.478796  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 11:34:36.478893  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 11:34:36.490758  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 11:34:36.502697  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 11:34:36.502791  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 11:34:36.513956  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 11:34:36.530576  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 11:34:36.530657  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 11:34:36.541885  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 11:34:36.552304  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 11:34:36.552378  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
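The four grep/rm pairs above apply the same stale-config check to each kubeconfig: if grep exits non-zero, the expected control-plane endpoint is absent, so the file is removed before kubeadm init runs. A minimal sketch of one iteration, assuming a plain bash rendering of that logic:

    # Hypothetical sketch of the check-and-remove step logged above for admin.conf;
    # the endpoint string is taken from the log, the if/rm wrapper is an assumption.
    if ! sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/admin.conf; then
      sudo rm -f /etc/kubernetes/admin.conf
    fi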
	I0408 11:34:36.563246  385781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 11:34:36.858032  385781 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 11:34:48.354356  385781 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 11:34:48.354447  385781 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 11:34:48.354553  385781 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 11:34:48.354646  385781 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 11:34:48.354736  385781 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 11:34:48.354791  385781 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 11:34:48.356495  385781 out.go:204]   - Generating certificates and keys ...
	I0408 11:34:48.356592  385781 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 11:34:48.356651  385781 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 11:34:48.356757  385781 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 11:34:48.356839  385781 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 11:34:48.356912  385781 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 11:34:48.356993  385781 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 11:34:48.357114  385781 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 11:34:48.357255  385781 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-438604 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I0408 11:34:48.357335  385781 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 11:34:48.357500  385781 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-438604 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I0408 11:34:48.357591  385781 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 11:34:48.357685  385781 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 11:34:48.357738  385781 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 11:34:48.357811  385781 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 11:34:48.357885  385781 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 11:34:48.357958  385781 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 11:34:48.358033  385781 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 11:34:48.358132  385781 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 11:34:48.358240  385781 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 11:34:48.358344  385781 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 11:34:48.358436  385781 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 11:34:48.361023  385781 out.go:204]   - Booting up control plane ...
	I0408 11:34:48.361136  385781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 11:34:48.361230  385781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 11:34:48.361309  385781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 11:34:48.361445  385781 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 11:34:48.361597  385781 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 11:34:48.361656  385781 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 11:34:48.361820  385781 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 11:34:48.361924  385781 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.562122 seconds
	I0408 11:34:48.362018  385781 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 11:34:48.362156  385781 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 11:34:48.362234  385781 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 11:34:48.362383  385781 kubeadm.go:309] [mark-control-plane] Marking the node ha-438604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 11:34:48.362431  385781 kubeadm.go:309] [bootstrap-token] Using token: u4tba0.5qhrqha5k5ry6q7a
	I0408 11:34:48.364067  385781 out.go:204]   - Configuring RBAC rules ...
	I0408 11:34:48.364189  385781 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 11:34:48.364268  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 11:34:48.364410  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 11:34:48.364560  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 11:34:48.364663  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 11:34:48.364763  385781 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 11:34:48.364889  385781 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 11:34:48.364933  385781 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 11:34:48.364972  385781 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 11:34:48.364978  385781 kubeadm.go:309] 
	I0408 11:34:48.365029  385781 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 11:34:48.365035  385781 kubeadm.go:309] 
	I0408 11:34:48.365096  385781 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 11:34:48.365102  385781 kubeadm.go:309] 
	I0408 11:34:48.365123  385781 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 11:34:48.365174  385781 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 11:34:48.365221  385781 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 11:34:48.365228  385781 kubeadm.go:309] 
	I0408 11:34:48.365283  385781 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 11:34:48.365290  385781 kubeadm.go:309] 
	I0408 11:34:48.365333  385781 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 11:34:48.365339  385781 kubeadm.go:309] 
	I0408 11:34:48.365380  385781 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 11:34:48.365446  385781 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 11:34:48.365518  385781 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 11:34:48.365525  385781 kubeadm.go:309] 
	I0408 11:34:48.365594  385781 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 11:34:48.365661  385781 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 11:34:48.365667  385781 kubeadm.go:309] 
	I0408 11:34:48.365741  385781 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token u4tba0.5qhrqha5k5ry6q7a \
	I0408 11:34:48.365861  385781 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 11:34:48.365908  385781 kubeadm.go:309] 	--control-plane 
	I0408 11:34:48.365918  385781 kubeadm.go:309] 
	I0408 11:34:48.366019  385781 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 11:34:48.366029  385781 kubeadm.go:309] 
	I0408 11:34:48.366124  385781 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token u4tba0.5qhrqha5k5ry6q7a \
	I0408 11:34:48.366275  385781 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 11:34:48.366291  385781 cni.go:84] Creating CNI manager for ""
	I0408 11:34:48.366298  385781 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 11:34:48.369157  385781 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 11:34:48.370626  385781 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 11:34:48.381241  385781 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0408 11:34:48.381271  385781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0408 11:34:48.452826  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0408 11:34:48.917927  385781 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 11:34:48.918035  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:48.918033  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-438604 minikube.k8s.io/updated_at=2024_04_08T11_34_48_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=ha-438604 minikube.k8s.io/primary=true
	I0408 11:34:49.066865  385781 ops.go:34] apiserver oom_adj: -16
	I0408 11:34:49.067087  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:49.567180  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:50.067069  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:50.567083  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:51.067823  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:51.567813  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:52.067770  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:52.567206  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:53.067114  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:53.567893  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:54.067938  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:54.568071  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:55.067135  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:55.567949  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:56.067099  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:56.567925  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:57.067066  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:57.568107  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:58.068083  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:58.568062  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:59.067573  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:59.567185  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:35:00.067869  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:35:00.182385  385781 kubeadm.go:1107] duration metric: took 11.264461996s to wait for elevateKubeSystemPrivileges
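The repeated "kubectl get sa default" calls between 11:34:49 and 11:35:00 are a poll loop waiting for the default ServiceAccount to appear, which is what the 11.26s elevateKubeSystemPrivileges metric above measures. A rough shell equivalent of that wait, with the interval inferred from the log cadence:

    # Hypothetical poll loop mirroring the ~500ms cadence seen above; the real
    # implementation lives in minikube's kubeadm code, this is only a sketch.
    until sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done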
	W0408 11:35:00.182426  385781 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 11:35:00.182434  385781 kubeadm.go:393] duration metric: took 23.791361403s to StartCluster
	I0408 11:35:00.182453  385781 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:00.182543  385781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:35:00.183344  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:00.183589  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 11:35:00.183601  385781 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:35:00.183628  385781 start.go:240] waiting for startup goroutines ...
	I0408 11:35:00.183638  385781 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 11:35:00.183732  385781 addons.go:69] Setting storage-provisioner=true in profile "ha-438604"
	I0408 11:35:00.183741  385781 addons.go:69] Setting default-storageclass=true in profile "ha-438604"
	I0408 11:35:00.183774  385781 addons.go:234] Setting addon storage-provisioner=true in "ha-438604"
	I0408 11:35:00.183789  385781 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-438604"
	I0408 11:35:00.183812  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:35:00.183838  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:00.184267  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.184318  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.184337  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.184346  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.200332  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0408 11:35:00.200350  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0408 11:35:00.200827  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.200907  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.201440  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.201468  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.201499  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.201521  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.201830  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.201938  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.202173  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:00.202469  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.202502  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.204679  385781 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:35:00.205057  385781 kapi.go:59] client config for ha-438604: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 11:35:00.205626  385781 cert_rotation.go:137] Starting client certificate rotation controller
	I0408 11:35:00.205976  385781 addons.go:234] Setting addon default-storageclass=true in "ha-438604"
	I0408 11:35:00.206027  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:35:00.206426  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.206465  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.220086  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0408 11:35:00.220600  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.221285  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.221318  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.221738  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.221994  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:00.223349  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0408 11:35:00.223736  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:35:00.223952  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.226265  385781 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 11:35:00.224435  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.226325  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.226781  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.227873  385781 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:35:00.227895  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 11:35:00.227921  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:35:00.228350  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.228379  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.231520  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.232067  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:35:00.232100  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.232401  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:35:00.232621  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:35:00.232894  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:35:00.233158  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:35:00.245841  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0408 11:35:00.246406  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.247061  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.247085  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.247451  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.247720  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:00.249556  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:35:00.249900  385781 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 11:35:00.249917  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 11:35:00.249946  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:35:00.252598  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.253036  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:35:00.253064  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.253198  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:35:00.253399  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:35:00.253575  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:35:00.253732  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:35:00.394744  385781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:35:00.396448  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
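The sed pipeline above injects a hosts block (192.168.39.1 host.minikube.internal) and a log directive into the CoreDNS Corefile before replacing the ConfigMap, which is what the "host record injected" message a few lines further down confirms. One way to inspect the result, assuming kubectl access to the cluster:

    # Hypothetical verification command; "Corefile" is the standard data key in the
    # kube-system/coredns ConfigMap.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'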
	I0408 11:35:00.406682  385781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 11:35:01.102749  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.102783  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.102798  385781 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0408 11:35:01.102901  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.102925  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.103100  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103139  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103160  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.103255  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.103288  385781 main.go:141] libmachine: (ha-438604) DBG | Closing plugin on server side
	I0408 11:35:01.103306  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103360  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103372  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.103383  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.103473  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103552  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103552  385781 main.go:141] libmachine: (ha-438604) DBG | Closing plugin on server side
	I0408 11:35:01.103660  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103675  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103814  385781 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0408 11:35:01.103824  385781 round_trippers.go:469] Request Headers:
	I0408 11:35:01.103833  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:35:01.103838  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:35:01.115283  385781 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0408 11:35:01.116011  385781 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0408 11:35:01.116029  385781 round_trippers.go:469] Request Headers:
	I0408 11:35:01.116036  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:35:01.116039  385781 round_trippers.go:473]     Content-Type: application/json
	I0408 11:35:01.116042  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:35:01.119042  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:35:01.119194  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.119207  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.119533  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.119555  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.121499  385781 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0408 11:35:01.122818  385781 addons.go:505] duration metric: took 939.178087ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0408 11:35:01.122859  385781 start.go:245] waiting for cluster config update ...
	I0408 11:35:01.122872  385781 start.go:254] writing updated cluster config ...
	I0408 11:35:01.124696  385781 out.go:177] 
	I0408 11:35:01.126381  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:01.126462  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:35:01.128231  385781 out.go:177] * Starting "ha-438604-m02" control-plane node in "ha-438604" cluster
	I0408 11:35:01.129662  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:35:01.129691  385781 cache.go:56] Caching tarball of preloaded images
	I0408 11:35:01.129771  385781 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:35:01.129784  385781 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:35:01.129858  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:35:01.130021  385781 start.go:360] acquireMachinesLock for ha-438604-m02: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:35:01.130063  385781 start.go:364] duration metric: took 22.772µs to acquireMachinesLock for "ha-438604-m02"
	I0408 11:35:01.130080  385781 start.go:93] Provisioning new machine with config: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:35:01.130139  385781 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0408 11:35:01.132021  385781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:35:01.132114  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:01.132139  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:01.147409  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46277
	I0408 11:35:01.148043  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:01.148509  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:01.148536  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:01.148887  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:01.149103  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:01.149232  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:01.149418  385781 start.go:159] libmachine.API.Create for "ha-438604" (driver="kvm2")
	I0408 11:35:01.149446  385781 client.go:168] LocalClient.Create starting
	I0408 11:35:01.149487  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:35:01.149527  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:35:01.149553  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:35:01.149608  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:35:01.149627  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:35:01.149638  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:35:01.149651  385781 main.go:141] libmachine: Running pre-create checks...
	I0408 11:35:01.149659  385781 main.go:141] libmachine: (ha-438604-m02) Calling .PreCreateCheck
	I0408 11:35:01.149830  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetConfigRaw
	I0408 11:35:01.150253  385781 main.go:141] libmachine: Creating machine...
	I0408 11:35:01.150270  385781 main.go:141] libmachine: (ha-438604-m02) Calling .Create
	I0408 11:35:01.150418  385781 main.go:141] libmachine: (ha-438604-m02) Creating KVM machine...
	I0408 11:35:01.151655  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found existing default KVM network
	I0408 11:35:01.151839  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found existing private KVM network mk-ha-438604
	I0408 11:35:01.151976  385781 main.go:141] libmachine: (ha-438604-m02) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02 ...
	I0408 11:35:01.152002  385781 main.go:141] libmachine: (ha-438604-m02) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:35:01.152042  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.151946  386193 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:35:01.152146  385781 main.go:141] libmachine: (ha-438604-m02) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:35:01.392132  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.392002  386193 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa...
	I0408 11:35:01.681870  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.681715  386193 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/ha-438604-m02.rawdisk...
	I0408 11:35:01.681916  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Writing magic tar header
	I0408 11:35:01.681930  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Writing SSH key tar header
	I0408 11:35:01.681943  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.681843  386193 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02 ...
	I0408 11:35:01.681959  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02
	I0408 11:35:01.682049  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02 (perms=drwx------)
	I0408 11:35:01.682081  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:35:01.682092  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:35:01.682108  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:35:01.682122  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:35:01.682133  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:35:01.682148  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:35:01.682159  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:35:01.682170  385781 main.go:141] libmachine: (ha-438604-m02) Creating domain...
	I0408 11:35:01.682185  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:35:01.682201  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:35:01.682215  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:35:01.682227  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home
	I0408 11:35:01.682237  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Skipping /home - not owner
	I0408 11:35:01.683239  385781 main.go:141] libmachine: (ha-438604-m02) define libvirt domain using xml: 
	I0408 11:35:01.683263  385781 main.go:141] libmachine: (ha-438604-m02) <domain type='kvm'>
	I0408 11:35:01.683271  385781 main.go:141] libmachine: (ha-438604-m02)   <name>ha-438604-m02</name>
	I0408 11:35:01.683276  385781 main.go:141] libmachine: (ha-438604-m02)   <memory unit='MiB'>2200</memory>
	I0408 11:35:01.683282  385781 main.go:141] libmachine: (ha-438604-m02)   <vcpu>2</vcpu>
	I0408 11:35:01.683286  385781 main.go:141] libmachine: (ha-438604-m02)   <features>
	I0408 11:35:01.683291  385781 main.go:141] libmachine: (ha-438604-m02)     <acpi/>
	I0408 11:35:01.683296  385781 main.go:141] libmachine: (ha-438604-m02)     <apic/>
	I0408 11:35:01.683301  385781 main.go:141] libmachine: (ha-438604-m02)     <pae/>
	I0408 11:35:01.683305  385781 main.go:141] libmachine: (ha-438604-m02)     
	I0408 11:35:01.683316  385781 main.go:141] libmachine: (ha-438604-m02)   </features>
	I0408 11:35:01.683336  385781 main.go:141] libmachine: (ha-438604-m02)   <cpu mode='host-passthrough'>
	I0408 11:35:01.683346  385781 main.go:141] libmachine: (ha-438604-m02)   
	I0408 11:35:01.683357  385781 main.go:141] libmachine: (ha-438604-m02)   </cpu>
	I0408 11:35:01.683378  385781 main.go:141] libmachine: (ha-438604-m02)   <os>
	I0408 11:35:01.683400  385781 main.go:141] libmachine: (ha-438604-m02)     <type>hvm</type>
	I0408 11:35:01.683415  385781 main.go:141] libmachine: (ha-438604-m02)     <boot dev='cdrom'/>
	I0408 11:35:01.683422  385781 main.go:141] libmachine: (ha-438604-m02)     <boot dev='hd'/>
	I0408 11:35:01.683432  385781 main.go:141] libmachine: (ha-438604-m02)     <bootmenu enable='no'/>
	I0408 11:35:01.683446  385781 main.go:141] libmachine: (ha-438604-m02)   </os>
	I0408 11:35:01.683458  385781 main.go:141] libmachine: (ha-438604-m02)   <devices>
	I0408 11:35:01.683470  385781 main.go:141] libmachine: (ha-438604-m02)     <disk type='file' device='cdrom'>
	I0408 11:35:01.683491  385781 main.go:141] libmachine: (ha-438604-m02)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/boot2docker.iso'/>
	I0408 11:35:01.683506  385781 main.go:141] libmachine: (ha-438604-m02)       <target dev='hdc' bus='scsi'/>
	I0408 11:35:01.683519  385781 main.go:141] libmachine: (ha-438604-m02)       <readonly/>
	I0408 11:35:01.683529  385781 main.go:141] libmachine: (ha-438604-m02)     </disk>
	I0408 11:35:01.683539  385781 main.go:141] libmachine: (ha-438604-m02)     <disk type='file' device='disk'>
	I0408 11:35:01.683551  385781 main.go:141] libmachine: (ha-438604-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:35:01.683568  385781 main.go:141] libmachine: (ha-438604-m02)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/ha-438604-m02.rawdisk'/>
	I0408 11:35:01.683583  385781 main.go:141] libmachine: (ha-438604-m02)       <target dev='hda' bus='virtio'/>
	I0408 11:35:01.683596  385781 main.go:141] libmachine: (ha-438604-m02)     </disk>
	I0408 11:35:01.683607  385781 main.go:141] libmachine: (ha-438604-m02)     <interface type='network'>
	I0408 11:35:01.683620  385781 main.go:141] libmachine: (ha-438604-m02)       <source network='mk-ha-438604'/>
	I0408 11:35:01.683630  385781 main.go:141] libmachine: (ha-438604-m02)       <model type='virtio'/>
	I0408 11:35:01.683638  385781 main.go:141] libmachine: (ha-438604-m02)     </interface>
	I0408 11:35:01.683649  385781 main.go:141] libmachine: (ha-438604-m02)     <interface type='network'>
	I0408 11:35:01.683661  385781 main.go:141] libmachine: (ha-438604-m02)       <source network='default'/>
	I0408 11:35:01.683670  385781 main.go:141] libmachine: (ha-438604-m02)       <model type='virtio'/>
	I0408 11:35:01.683680  385781 main.go:141] libmachine: (ha-438604-m02)     </interface>
	I0408 11:35:01.683703  385781 main.go:141] libmachine: (ha-438604-m02)     <serial type='pty'>
	I0408 11:35:01.683715  385781 main.go:141] libmachine: (ha-438604-m02)       <target port='0'/>
	I0408 11:35:01.683727  385781 main.go:141] libmachine: (ha-438604-m02)     </serial>
	I0408 11:35:01.683737  385781 main.go:141] libmachine: (ha-438604-m02)     <console type='pty'>
	I0408 11:35:01.683750  385781 main.go:141] libmachine: (ha-438604-m02)       <target type='serial' port='0'/>
	I0408 11:35:01.683763  385781 main.go:141] libmachine: (ha-438604-m02)     </console>
	I0408 11:35:01.683798  385781 main.go:141] libmachine: (ha-438604-m02)     <rng model='virtio'>
	I0408 11:35:01.683825  385781 main.go:141] libmachine: (ha-438604-m02)       <backend model='random'>/dev/random</backend>
	I0408 11:35:01.683836  385781 main.go:141] libmachine: (ha-438604-m02)     </rng>
	I0408 11:35:01.683846  385781 main.go:141] libmachine: (ha-438604-m02)     
	I0408 11:35:01.683857  385781 main.go:141] libmachine: (ha-438604-m02)     
	I0408 11:35:01.683863  385781 main.go:141] libmachine: (ha-438604-m02)   </devices>
	I0408 11:35:01.683894  385781 main.go:141] libmachine: (ha-438604-m02) </domain>
	I0408 11:35:01.683911  385781 main.go:141] libmachine: (ha-438604-m02) 
	I0408 11:35:01.690956  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:23:75:de in network default
	I0408 11:35:01.691632  385781 main.go:141] libmachine: (ha-438604-m02) Ensuring networks are active...
	I0408 11:35:01.691663  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:01.692420  385781 main.go:141] libmachine: (ha-438604-m02) Ensuring network default is active
	I0408 11:35:01.692663  385781 main.go:141] libmachine: (ha-438604-m02) Ensuring network mk-ha-438604 is active
	I0408 11:35:01.693031  385781 main.go:141] libmachine: (ha-438604-m02) Getting domain xml...
	I0408 11:35:01.693796  385781 main.go:141] libmachine: (ha-438604-m02) Creating domain...
	I0408 11:35:02.958387  385781 main.go:141] libmachine: (ha-438604-m02) Waiting to get IP...
	I0408 11:35:02.959455  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:02.959978  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:02.960023  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:02.959949  386193 retry.go:31] will retry after 261.150221ms: waiting for machine to come up
	I0408 11:35:03.222433  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:03.222924  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:03.222948  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:03.222873  386193 retry.go:31] will retry after 338.774375ms: waiting for machine to come up
	I0408 11:35:03.563954  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:03.564602  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:03.564631  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:03.564553  386193 retry.go:31] will retry after 443.047947ms: waiting for machine to come up
	I0408 11:35:04.009061  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:04.009635  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:04.009666  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:04.009556  386193 retry.go:31] will retry after 435.72415ms: waiting for machine to come up
	I0408 11:35:04.447396  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:04.447952  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:04.447991  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:04.447873  386193 retry.go:31] will retry after 565.812097ms: waiting for machine to come up
	I0408 11:35:05.015745  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:05.016316  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:05.016374  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:05.016267  386193 retry.go:31] will retry after 728.831545ms: waiting for machine to come up
	I0408 11:35:05.746267  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:05.746722  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:05.746747  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:05.746684  386193 retry.go:31] will retry after 883.417203ms: waiting for machine to come up
	I0408 11:35:06.632192  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:06.632711  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:06.632752  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:06.632653  386193 retry.go:31] will retry after 1.443827675s: waiting for machine to come up
	I0408 11:35:08.078256  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:08.078710  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:08.078743  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:08.078693  386193 retry.go:31] will retry after 1.582710551s: waiting for machine to come up
	I0408 11:35:09.663511  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:09.664043  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:09.664087  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:09.663968  386193 retry.go:31] will retry after 1.808371147s: waiting for machine to come up
	I0408 11:35:11.474372  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:11.474814  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:11.474841  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:11.474752  386193 retry.go:31] will retry after 2.023384632s: waiting for machine to come up
	I0408 11:35:13.500588  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:13.501181  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:13.501208  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:13.501125  386193 retry.go:31] will retry after 2.843950856s: waiting for machine to come up
	I0408 11:35:16.347031  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:16.347506  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:16.347537  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:16.347467  386193 retry.go:31] will retry after 3.702430785s: waiting for machine to come up
	I0408 11:35:20.051340  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:20.051762  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:20.051824  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:20.051746  386193 retry.go:31] will retry after 3.602659027s: waiting for machine to come up
	I0408 11:35:23.657430  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.658004  385781 main.go:141] libmachine: (ha-438604-m02) Found IP for machine: 192.168.39.219
	I0408 11:35:23.658029  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has current primary IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.658036  385781 main.go:141] libmachine: (ha-438604-m02) Reserving static IP address...
	I0408 11:35:23.658598  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find host DHCP lease matching {name: "ha-438604-m02", mac: "52:54:00:b9:2b:19", ip: "192.168.39.219"} in network mk-ha-438604
	I0408 11:35:23.735106  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Getting to WaitForSSH function...
	I0408 11:35:23.735145  385781 main.go:141] libmachine: (ha-438604-m02) Reserved static IP address: 192.168.39.219
	I0408 11:35:23.735159  385781 main.go:141] libmachine: (ha-438604-m02) Waiting for SSH to be available...
	I0408 11:35:23.738077  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.738536  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:23.738569  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.738646  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Using SSH client type: external
	I0408 11:35:23.738695  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa (-rw-------)
	I0408 11:35:23.738734  385781 main.go:141] libmachine: (ha-438604-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.219 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:35:23.738757  385781 main.go:141] libmachine: (ha-438604-m02) DBG | About to run SSH command:
	I0408 11:35:23.738843  385781 main.go:141] libmachine: (ha-438604-m02) DBG | exit 0
	I0408 11:35:23.867834  385781 main.go:141] libmachine: (ha-438604-m02) DBG | SSH cmd err, output: <nil>: 
	I0408 11:35:23.868060  385781 main.go:141] libmachine: (ha-438604-m02) KVM machine creation complete!
	I0408 11:35:23.868417  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetConfigRaw
	I0408 11:35:23.868987  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:23.869221  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:23.869442  385781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:35:23.869458  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:35:23.870907  385781 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:35:23.870922  385781 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:35:23.870929  385781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:35:23.870939  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:23.873288  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.873752  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:23.873781  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.873904  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:23.874122  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.874287  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.874431  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:23.874618  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:23.874915  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:23.874936  385781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:35:23.987508  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:35:23.987555  385781 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:35:23.987567  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:23.990503  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.990872  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:23.990902  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.991019  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:23.991290  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.991498  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.991673  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:23.991936  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:23.992170  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:23.992184  385781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:35:24.104835  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:35:24.104924  385781 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:35:24.104935  385781 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:35:24.104947  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:24.105225  385781 buildroot.go:166] provisioning hostname "ha-438604-m02"
	I0408 11:35:24.105261  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:24.105530  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.108554  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.108963  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.108996  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.109111  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.109348  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.109545  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.109754  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.109975  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:24.110195  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:24.110210  385781 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604-m02 && echo "ha-438604-m02" | sudo tee /etc/hostname
	I0408 11:35:24.234521  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604-m02
	
	I0408 11:35:24.234559  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.237824  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.238241  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.238272  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.238517  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.238741  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.238952  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.239097  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.239278  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:24.239450  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:24.239485  385781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:35:24.362242  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:35:24.362280  385781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:35:24.362298  385781 buildroot.go:174] setting up certificates
	I0408 11:35:24.362311  385781 provision.go:84] configureAuth start
	I0408 11:35:24.362321  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:24.362659  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:24.365655  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.366126  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.366170  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.366343  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.369125  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.369439  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.369464  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.369641  385781 provision.go:143] copyHostCerts
	I0408 11:35:24.369673  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:35:24.369705  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:35:24.369714  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:35:24.369795  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:35:24.369881  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:35:24.369899  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:35:24.369907  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:35:24.369929  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:35:24.369984  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:35:24.370012  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:35:24.370018  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:35:24.370053  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:35:24.370132  385781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604-m02 san=[127.0.0.1 192.168.39.219 ha-438604-m02 localhost minikube]
	I0408 11:35:24.565808  385781 provision.go:177] copyRemoteCerts
	I0408 11:35:24.565885  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:35:24.565921  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.568808  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.569116  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.569151  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.569313  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.569531  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.569725  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.569861  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:24.659113  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:35:24.659185  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:35:24.686861  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:35:24.686942  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 11:35:24.714397  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:35:24.714472  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:35:24.740893  385781 provision.go:87] duration metric: took 378.567432ms to configureAuth
	I0408 11:35:24.740932  385781 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:35:24.741131  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:24.741251  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.744030  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.744384  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.744419  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.744618  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.744839  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.745029  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.745181  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.745369  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:24.745557  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:24.745573  385781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:35:25.029666  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:35:25.029709  385781 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:35:25.029721  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetURL
	I0408 11:35:25.031297  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Using libvirt version 6000000
	I0408 11:35:25.033496  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.033854  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.033888  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.034045  385781 main.go:141] libmachine: Docker is up and running!
	I0408 11:35:25.034063  385781 main.go:141] libmachine: Reticulating splines...
	I0408 11:35:25.034072  385781 client.go:171] duration metric: took 23.884615127s to LocalClient.Create
	I0408 11:35:25.034102  385781 start.go:167] duration metric: took 23.884683605s to libmachine.API.Create "ha-438604"
	I0408 11:35:25.034115  385781 start.go:293] postStartSetup for "ha-438604-m02" (driver="kvm2")
	I0408 11:35:25.034132  385781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:35:25.034159  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.034439  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:35:25.034467  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:25.036530  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.036862  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.036890  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.037039  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.037302  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.037493  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.037655  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:25.127334  385781 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:35:25.132049  385781 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:35:25.132086  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:35:25.132166  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:35:25.132247  385781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:35:25.132258  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:35:25.132340  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:35:25.143186  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:35:25.168828  385781 start.go:296] duration metric: took 134.691063ms for postStartSetup
	I0408 11:35:25.168896  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetConfigRaw
	I0408 11:35:25.169508  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:25.172095  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.172517  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.172549  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.172752  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:35:25.172963  385781 start.go:128] duration metric: took 24.042813058s to createHost
	I0408 11:35:25.172988  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:25.175491  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.175816  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.175849  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.176039  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.176289  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.176489  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.176688  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.176859  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:25.177080  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:25.177094  385781 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:35:25.289062  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576125.259373380
	
	I0408 11:35:25.289094  385781 fix.go:216] guest clock: 1712576125.259373380
	I0408 11:35:25.289110  385781 fix.go:229] Guest: 2024-04-08 11:35:25.25937338 +0000 UTC Remote: 2024-04-08 11:35:25.172976644 +0000 UTC m=+83.156025480 (delta=86.396736ms)
	I0408 11:35:25.289132  385781 fix.go:200] guest clock delta is within tolerance: 86.396736ms
	I0408 11:35:25.289140  385781 start.go:83] releasing machines lock for "ha-438604-m02", held for 24.15906757s
	I0408 11:35:25.289169  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.289462  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:25.292050  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.292434  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.292462  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.295135  385781 out.go:177] * Found network options:
	I0408 11:35:25.296672  385781 out.go:177]   - NO_PROXY=192.168.39.99
	W0408 11:35:25.298045  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:35:25.298077  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.298663  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.298889  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.298977  385781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:35:25.299025  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	W0408 11:35:25.299347  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:35:25.299425  385781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:35:25.299447  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:25.301963  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302231  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302325  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.302355  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302521  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.302605  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.302633  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302738  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.302808  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.302949  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.302958  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.303128  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.303181  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:25.303259  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:25.543966  385781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:35:25.550799  385781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:35:25.550871  385781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:35:25.568455  385781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:35:25.568492  385781 start.go:494] detecting cgroup driver to use...
	I0408 11:35:25.568573  385781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:35:25.588994  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:35:25.605132  385781 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:35:25.605214  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:35:25.620512  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:35:25.636154  385781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:35:25.757479  385781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:35:25.896785  385781 docker.go:233] disabling docker service ...
	I0408 11:35:25.896866  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:35:25.912910  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:35:25.926867  385781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:35:26.076910  385781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:35:26.219444  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:35:26.234391  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:35:26.254212  385781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:35:26.254293  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.266948  385781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:35:26.267033  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.279161  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.290792  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.302547  385781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:35:26.314375  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.325941  385781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.344703  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.357000  385781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:35:26.367883  385781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:35:26.367961  385781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:35:26.383007  385781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:35:26.394174  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:35:26.535534  385781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 11:35:26.689603  385781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:35:26.689697  385781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:35:26.694793  385781 start.go:562] Will wait 60s for crictl version
	I0408 11:35:26.694856  385781 ssh_runner.go:195] Run: which crictl
	I0408 11:35:26.698809  385781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:35:26.737497  385781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:35:26.737591  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:35:26.767566  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:35:26.799948  385781 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:35:26.801685  385781 out.go:177]   - env NO_PROXY=192.168.39.99
	I0408 11:35:26.803419  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:26.806533  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:26.806893  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:26.806934  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:26.807121  385781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:35:26.811543  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:35:26.824447  385781 mustload.go:65] Loading cluster: ha-438604
	I0408 11:35:26.824673  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:26.824942  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:26.824971  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:26.840692  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0408 11:35:26.841177  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:26.841729  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:26.841756  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:26.842116  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:26.842360  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:26.843929  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:35:26.844297  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:26.844324  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:26.859195  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0408 11:35:26.859669  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:26.860201  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:26.860232  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:26.860680  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:26.860896  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:35:26.861190  385781 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.219
	I0408 11:35:26.861206  385781 certs.go:194] generating shared ca certs ...
	I0408 11:35:26.861223  385781 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:26.861413  385781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:35:26.861462  385781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:35:26.861476  385781 certs.go:256] generating profile certs ...
	I0408 11:35:26.861593  385781 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:35:26.861627  385781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f
	I0408 11:35:26.861649  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.254]
	I0408 11:35:26.945516  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f ...
	I0408 11:35:26.945554  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f: {Name:mk2fa49de500562c209edfcdad78aac14f2fcad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:26.945764  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f ...
	I0408 11:35:26.945788  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f: {Name:mka54ad1fc6dd7a6cccca4f8741d6cd51c1a29d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:26.945884  385781 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:35:26.946053  385781 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
	I0408 11:35:26.946246  385781 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
	I0408 11:35:26.946271  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:35:26.946285  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:35:26.946295  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:35:26.946308  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:35:26.946322  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:35:26.946337  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:35:26.946354  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:35:26.946370  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:35:26.946437  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:35:26.946478  385781 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:35:26.946491  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:35:26.946520  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:35:26.946549  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:35:26.946584  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:35:26.946635  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:35:26.946674  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:35:26.946696  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:35:26.946710  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:26.946761  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:35:26.950107  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:26.950489  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:35:26.950519  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:26.950649  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:35:26.950865  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:35:26.951078  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:35:26.951244  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:35:27.032139  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0408 11:35:27.037846  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 11:35:27.049435  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0408 11:35:27.054099  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 11:35:27.067647  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 11:35:27.075508  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 11:35:27.090104  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0408 11:35:27.094859  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0408 11:35:27.106927  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0408 11:35:27.112469  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 11:35:27.125838  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0408 11:35:27.130420  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 11:35:27.142630  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:35:27.169237  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:35:27.195177  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:35:27.220637  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:35:27.246050  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 11:35:27.271158  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 11:35:27.297173  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:35:27.322364  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:35:27.348427  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:35:27.374612  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:35:27.401527  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:35:27.428324  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 11:35:27.446364  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 11:35:27.463903  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 11:35:27.482292  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0408 11:35:27.500782  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 11:35:27.518790  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 11:35:27.537117  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0408 11:35:27.555554  385781 ssh_runner.go:195] Run: openssl version
	I0408 11:35:27.561414  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:35:27.572343  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:35:27.577056  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:35:27.577129  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:35:27.582910  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:35:27.593852  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:35:27.605057  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:35:27.609452  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:35:27.609519  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:35:27.615618  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 11:35:27.627111  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:35:27.639164  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:27.644102  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:27.644161  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:27.649966  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:35:27.661463  385781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:35:27.665724  385781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:35:27.665778  385781 kubeadm.go:928] updating node {m02 192.168.39.219 8443 v1.29.3 crio true true} ...
	I0408 11:35:27.665885  385781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:35:27.665924  385781 kube-vip.go:111] generating kube-vip config ...
	I0408 11:35:27.665967  385781 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:35:27.684390  385781 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:35:27.684478  385781 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
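
	The block above is the kube-vip static-pod manifest that minikube generates for the joining control-plane node and later copies to /etc/kubernetes/manifests/kube-vip.yaml (see the 1346-byte scp at 11:36:01.983989 below); it runs kube-vip with control-plane load balancing enabled and advertises the HA VIP 192.168.39.254 on port 8443. As a stand-alone, hedged sketch (not minikube's own code, and not part of this test run), the Go snippet below parses a manifest like this with sigs.k8s.io/yaml and reads back the VIP-related settings; the local file name kube-vip.yaml is an assumption for illustration.

	// sketch: inspect a kube-vip static-pod manifest such as the one logged above.
	// Assumes the manifest was saved locally as kube-vip.yaml; illustration only.
	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		raw, err := os.ReadFile("kube-vip.yaml") // assumed local copy of the manifest
		if err != nil {
			panic(err)
		}

		var pod corev1.Pod
		if err := yaml.Unmarshal(raw, &pod); err != nil {
			panic(err)
		}

		// Collect the env vars of the kube-vip container for easy lookup.
		env := map[string]string{}
		for _, c := range pod.Spec.Containers {
			if c.Name != "kube-vip" {
				continue
			}
			for _, e := range c.Env {
				env[e.Name] = e.Value
			}
		}

		fmt.Println("VIP address:", env["address"])   // 192.168.39.254 in this run
		fmt.Println("lb_enable  :", env["lb_enable"]) // "true" in this run
		fmt.Println("lb_port    :", env["lb_port"])   // "8443" in this run
	}
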
	I0408 11:35:27.684558  385781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:35:27.695337  385781 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0408 11:35:27.695416  385781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0408 11:35:27.705672  385781 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0408 11:35:27.705685  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0408 11:35:27.705740  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:35:27.705692  385781 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0408 11:35:27.705833  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:35:27.710620  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0408 11:35:27.710649  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0408 11:35:29.600107  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:35:29.600205  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:35:29.605583  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0408 11:35:29.605627  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0408 11:36:01.397808  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:36:01.418473  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:36:01.418592  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:36:01.424158  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0408 11:36:01.424199  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
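
	The download.go lines above fetch kubectl, kubeadm and kubelet from dl.k8s.io with a ?checksum=file:...sha256 query, cache them under .minikube/cache, and then scp each binary into /var/lib/minikube/binaries/v1.29.3 on the node. As a hedged, generic sketch of that download-and-verify step (not minikube's implementation; it assumes the published .sha256 file starts with the hex digest), one could do:

	// sketch: download a release binary and verify it against its published sha256.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch writes the response body of url to path and returns its hex sha256.
	func fetch(url, path string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		out, err := os.Create(path)
		if err != nil {
			return "", err
		}
		defer out.Close()
		h := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet"
		got, err := fetch(base, "kubelet")
		if err != nil {
			panic(err)
		}
		resp, err := http.Get(base + ".sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		want, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Assumption: the .sha256 file begins with the hex digest.
		if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
			panic(fmt.Sprintf("checksum mismatch: got %s", got))
		}
		fmt.Println("kubelet verified:", got)
	}
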
	I0408 11:36:01.935410  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 11:36:01.946923  385781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0408 11:36:01.965291  385781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:36:01.983989  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0408 11:36:02.002710  385781 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:36:02.007428  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:36:02.021446  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:36:02.160368  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:36:02.180428  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:36:02.180967  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:36:02.181029  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:36:02.196781  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I0408 11:36:02.197389  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:36:02.198141  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:36:02.198159  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:36:02.198619  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:36:02.198891  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:36:02.199131  385781 start.go:316] joinCluster: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:36:02.199260  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 11:36:02.199281  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:36:02.202792  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:36:02.203299  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:36:02.203328  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:36:02.203643  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:36:02.203852  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:36:02.204105  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:36:02.204288  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:36:02.373316  385781 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:36:02.373373  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq7sng.a232pzw4qrf0cj6i --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m02 --control-plane --apiserver-advertise-address=192.168.39.219 --apiserver-bind-port=8443"
	I0408 11:36:27.381682  385781 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq7sng.a232pzw4qrf0cj6i --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m02 --control-plane --apiserver-advertise-address=192.168.39.219 --apiserver-bind-port=8443": (25.008277641s)
	I0408 11:36:27.381729  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 11:36:27.804605  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-438604-m02 minikube.k8s.io/updated_at=2024_04_08T11_36_27_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=ha-438604 minikube.k8s.io/primary=false
	I0408 11:36:27.944930  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-438604-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 11:36:28.059560  385781 start.go:318] duration metric: took 25.860422388s to joinCluster
	I0408 11:36:28.059655  385781 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:36:28.061481  385781 out.go:177] * Verifying Kubernetes components...
	I0408 11:36:28.060076  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:36:28.062973  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:36:28.222019  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:36:28.242681  385781 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:36:28.242963  385781 kapi.go:59] client config for ha-438604: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 11:36:28.243034  385781 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I0408 11:36:28.243247  385781 node_ready.go:35] waiting up to 6m0s for node "ha-438604-m02" to be "Ready" ...
	I0408 11:36:28.243422  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:28.243435  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:28.243445  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:28.243451  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:28.253757  385781 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0408 11:36:28.743566  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:28.743591  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:28.743600  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:28.743604  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:28.747081  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:29.244195  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:29.244221  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:29.244230  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:29.244234  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:29.248119  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:29.744435  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:29.744457  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:29.744466  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:29.744470  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:29.755676  385781 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0408 11:36:30.244065  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:30.244092  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:30.244100  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:30.244104  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:30.247540  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:30.248415  385781 node_ready.go:53] node "ha-438604-m02" has status "Ready":"False"
	I0408 11:36:30.743602  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:30.743636  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:30.743647  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:30.743654  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:30.748477  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:31.244499  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:31.244533  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:31.244544  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:31.244550  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:31.248385  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:31.744452  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:31.744512  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:31.744525  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:31.744531  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:31.748568  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:32.244258  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:32.244284  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:32.244294  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:32.244301  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:32.249131  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:32.249751  385781 node_ready.go:53] node "ha-438604-m02" has status "Ready":"False"
	I0408 11:36:32.744232  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:32.744256  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:32.744264  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:32.744268  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:32.748509  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:33.243777  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:33.243804  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:33.243815  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:33.243822  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:33.248010  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:33.743860  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:33.743891  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:33.743903  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:33.743909  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:33.754454  385781 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0408 11:36:34.243482  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:34.243525  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.243536  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.243542  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.249036  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:36:34.251293  385781 node_ready.go:53] node "ha-438604-m02" has status "Ready":"False"
	I0408 11:36:34.743650  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:34.743678  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.743703  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.743709  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.747472  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.748289  385781 node_ready.go:49] node "ha-438604-m02" has status "Ready":"True"
	I0408 11:36:34.748316  385781 node_ready.go:38] duration metric: took 6.505051931s for node "ha-438604-m02" to be "Ready" ...
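
	The round_trippers lines above show minikube polling GET /api/v1/nodes/ha-438604-m02 roughly every 500ms until the node reports the Ready condition (about 6.5s here), before it moves on to waiting for the system-critical pods below. A minimal, hedged client-go sketch of the same kind of wait loop (not taken from minikube; the kubeconfig path is a placeholder, and the node name is the one from this run):

	// sketch: wait for a node's Ready condition using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-438604-m02"); err != nil {
			panic(err)
		}
		fmt.Println("node Ready")
	}
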
	I0408 11:36:34.748339  385781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:36:34.748424  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:34.748436  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.748447  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.748453  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.754379  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:36:34.760411  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.760504  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-7gpzq
	I0408 11:36:34.760509  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.760516  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.760523  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.764292  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.764880  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:34.764895  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.764902  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.764907  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.767984  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.768572  385781 pod_ready.go:92] pod "coredns-76f75df574-7gpzq" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:34.768595  385781 pod_ready.go:81] duration metric: took 8.155667ms for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.768605  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.768662  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-wqrvc
	I0408 11:36:34.768670  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.768677  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.768681  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.773329  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:34.773967  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:34.773984  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.773991  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.773994  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.780542  385781 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0408 11:36:34.781119  385781 pod_ready.go:92] pod "coredns-76f75df574-wqrvc" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:34.781142  385781 pod_ready.go:81] duration metric: took 12.529681ms for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.781157  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.781230  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604
	I0408 11:36:34.781241  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.781251  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.781257  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.784634  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.785244  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:34.785260  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.785267  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.785272  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.788038  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:34.788594  385781 pod_ready.go:92] pod "etcd-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:34.788613  385781 pod_ready.go:81] duration metric: took 7.449373ms for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.788623  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.788676  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:34.788684  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.788690  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.788695  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.791720  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.792508  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:34.792536  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.792544  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.792548  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.794924  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:35.288893  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:35.288933  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.288945  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.288951  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.293036  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:35.294052  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:35.294068  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.294076  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.294079  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.297225  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:35.789111  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:35.789138  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.789145  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.789150  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.792783  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:35.793601  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:35.793616  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.793624  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.793629  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.796417  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:36.289582  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:36.289611  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.289626  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.289633  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.293285  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:36.293901  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:36.293918  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.293926  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.293929  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.296833  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:36.788843  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:36.788874  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.788882  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.788886  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.793133  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:36.794171  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:36.794186  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.794194  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.794197  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.797235  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:36.797863  385781 pod_ready.go:102] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"False"
	I0408 11:36:37.289391  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:37.289419  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.289430  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.289434  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.293155  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:37.293980  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:37.293999  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.294007  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.294011  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.296987  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:37.789029  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:37.789059  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.789067  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.789070  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.793365  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:37.794092  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:37.794108  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.794116  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.794119  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.797369  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.289260  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:38.289285  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.289293  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.289296  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.292902  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.293678  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:38.293693  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.293701  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.293704  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.296355  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:38.789345  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:38.789373  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.789385  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.789393  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.793214  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.794044  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:38.794060  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.794068  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.794072  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.797384  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.798205  385781 pod_ready.go:102] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"False"
	I0408 11:36:39.289125  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:39.289146  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.289155  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.289158  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.293122  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:39.293795  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:39.293812  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.293820  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.293823  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.296538  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:39.789721  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:39.789751  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.789760  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.789764  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.793599  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:39.794544  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:39.794563  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.794572  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.794578  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.797939  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:40.289517  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:40.289545  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.289554  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.289559  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.293709  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:40.294341  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:40.294360  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.294367  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.294371  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.297903  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:40.788867  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:40.788895  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.788904  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.788909  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.792873  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:40.793477  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:40.793519  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.793534  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.793540  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.796818  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.289529  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:41.289557  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.289565  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.289570  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.294522  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:41.295448  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.295465  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.295473  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.295478  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.299189  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.299774  385781 pod_ready.go:102] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"False"
	I0408 11:36:41.789182  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:41.789215  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.789227  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.789234  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.793831  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:41.794616  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.794637  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.794653  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.794660  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.798274  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.798979  385781 pod_ready.go:92] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.799008  385781 pod_ready.go:81] duration metric: took 7.0103782s for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.799031  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.799113  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604
	I0408 11:36:41.799125  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.799136  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.799142  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.802293  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.803080  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:41.803098  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.803106  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.803110  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.806600  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.807195  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.807218  385781 pod_ready.go:81] duration metric: took 8.178645ms for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.807229  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.807297  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m02
	I0408 11:36:41.807308  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.807317  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.807331  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.810383  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.811020  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.811034  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.811041  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.811046  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.813960  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.814540  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.814558  385781 pod_ready.go:81] duration metric: took 7.322437ms for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.814568  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.814624  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604
	I0408 11:36:41.814631  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.814638  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.814642  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.817199  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.818052  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:41.818067  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.818073  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.818076  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.820761  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.821564  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.821584  385781 pod_ready.go:81] duration metric: took 7.008859ms for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.821594  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.821643  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m02
	I0408 11:36:41.821651  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.821658  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.821663  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.824384  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.943994  385781 request.go:629] Waited for 118.909495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.944065  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.944070  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.944077  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.944080  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.947809  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.948434  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.948461  385781 pod_ready.go:81] duration metric: took 126.859334ms for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.948481  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.143717  385781 request.go:629] Waited for 195.137496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5vc66
	I0408 11:36:42.143794  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5vc66
	I0408 11:36:42.143799  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.143806  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.143810  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.147303  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:42.343754  385781 request.go:629] Waited for 195.589457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:42.343864  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:42.343869  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.343877  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.343880  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.347551  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:42.348130  385781 pod_ready.go:92] pod "kube-proxy-5vc66" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:42.348153  385781 pod_ready.go:81] duration metric: took 399.662514ms for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.348166  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.544200  385781 request.go:629] Waited for 195.950833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:36:42.544286  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:36:42.544292  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.544302  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.544309  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.548402  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:42.744504  385781 request.go:629] Waited for 195.398875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:42.744603  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:42.744613  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.744622  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.744627  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.748502  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:42.749324  385781 pod_ready.go:92] pod "kube-proxy-v98zm" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:42.749352  385781 pod_ready.go:81] duration metric: took 401.175152ms for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.749365  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.944443  385781 request.go:629] Waited for 194.973445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:36:42.944547  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:36:42.944561  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.944571  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.944578  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.948915  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:43.143974  385781 request.go:629] Waited for 194.38792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:43.144056  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:43.144063  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.144072  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.144078  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.147512  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:43.148209  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:43.148235  385781 pod_ready.go:81] duration metric: took 398.861276ms for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:43.148250  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:43.344310  385781 request.go:629] Waited for 195.952368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:36:43.344377  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:36:43.344384  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.344391  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.344396  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.348251  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:43.544502  385781 request.go:629] Waited for 195.28393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:43.544570  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:43.544574  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.544583  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.544588  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.548549  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:43.549219  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:43.549237  385781 pod_ready.go:81] duration metric: took 400.978745ms for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:43.549253  385781 pod_ready.go:38] duration metric: took 8.800894972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
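
The pod_ready checks above poll each control-plane pod (and its node) until the pod reports the PodReady condition as True. Below is a minimal, self-contained sketch of that idea using client-go; the helper name waitForPodReady and the kubeconfig path are illustrative assumptions, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its PodReady condition is True or the timeout expires.
// Hypothetical helper, loosely modelled on the pod_ready.go log lines above.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // re-poll; the real client is also throttled client-side, as the log shows
	}
	return fmt.Errorf("pod %s/%s was not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumed kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-438604", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
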
	I0408 11:36:43.549279  385781 api_server.go:52] waiting for apiserver process to appear ...
	I0408 11:36:43.549343  385781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:36:43.567277  385781 api_server.go:72] duration metric: took 15.507562921s to wait for apiserver process to appear ...
	I0408 11:36:43.567306  385781 api_server.go:88] waiting for apiserver healthz status ...
	I0408 11:36:43.567328  385781 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I0408 11:36:43.572315  385781 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I0408 11:36:43.572420  385781 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I0408 11:36:43.572432  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.572440  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.572445  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.573606  385781 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0408 11:36:43.573737  385781 api_server.go:141] control plane version: v1.29.3
	I0408 11:36:43.573764  385781 api_server.go:131] duration metric: took 6.450273ms to wait for apiserver health ...
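
The apiserver health wait above simply hits /healthz and expects a 200 response with body "ok", then reads /version. A rough Go equivalent is sketched below; it skips TLS verification purely for brevity (the real client authenticates with the cluster CA and client certificates), and it assumes /healthz is reachable anonymously, which is the Kubernetes default.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: verification disabled only to keep the sketch short; do not do this in real tooling.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.99:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
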
	I0408 11:36:43.573776  385781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 11:36:43.744235  385781 request.go:629] Waited for 170.361884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:43.744324  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:43.744332  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.744342  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.744349  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.752886  385781 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0408 11:36:43.758544  385781 system_pods.go:59] 17 kube-system pods found
	I0408 11:36:43.758587  385781 system_pods.go:61] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:36:43.758594  385781 system_pods.go:61] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:36:43.758599  385781 system_pods.go:61] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:36:43.758604  385781 system_pods.go:61] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:36:43.758609  385781 system_pods.go:61] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:36:43.758613  385781 system_pods.go:61] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:36:43.758617  385781 system_pods.go:61] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:36:43.758622  385781 system_pods.go:61] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:36:43.758630  385781 system_pods.go:61] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:36:43.758636  385781 system_pods.go:61] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:36:43.758641  385781 system_pods.go:61] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:36:43.758646  385781 system_pods.go:61] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:36:43.758651  385781 system_pods.go:61] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:36:43.758658  385781 system_pods.go:61] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:36:43.758666  385781 system_pods.go:61] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:36:43.758671  385781 system_pods.go:61] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:36:43.758677  385781 system_pods.go:61] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:36:43.758686  385781 system_pods.go:74] duration metric: took 184.900644ms to wait for pod list to return data ...
	I0408 11:36:43.758704  385781 default_sa.go:34] waiting for default service account to be created ...
	I0408 11:36:43.944147  385781 request.go:629] Waited for 185.347535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:36:43.944239  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:36:43.944244  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.944251  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.944263  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.948890  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:43.949107  385781 default_sa.go:45] found service account: "default"
	I0408 11:36:43.949123  385781 default_sa.go:55] duration metric: took 190.411578ms for default service account to be created ...
	I0408 11:36:43.949133  385781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 11:36:44.144358  385781 request.go:629] Waited for 195.129265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:44.144427  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:44.144432  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:44.144440  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:44.144445  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:44.150184  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:36:44.154262  385781 system_pods.go:86] 17 kube-system pods found
	I0408 11:36:44.154290  385781 system_pods.go:89] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:36:44.154296  385781 system_pods.go:89] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:36:44.154300  385781 system_pods.go:89] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:36:44.154304  385781 system_pods.go:89] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:36:44.154307  385781 system_pods.go:89] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:36:44.154311  385781 system_pods.go:89] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:36:44.154315  385781 system_pods.go:89] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:36:44.154319  385781 system_pods.go:89] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:36:44.154323  385781 system_pods.go:89] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:36:44.154327  385781 system_pods.go:89] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:36:44.154331  385781 system_pods.go:89] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:36:44.154334  385781 system_pods.go:89] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:36:44.154338  385781 system_pods.go:89] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:36:44.154342  385781 system_pods.go:89] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:36:44.154346  385781 system_pods.go:89] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:36:44.154350  385781 system_pods.go:89] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:36:44.154353  385781 system_pods.go:89] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:36:44.154359  385781 system_pods.go:126] duration metric: took 205.221822ms to wait for k8s-apps to be running ...
	I0408 11:36:44.154379  385781 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 11:36:44.154425  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:36:44.173282  385781 system_svc.go:56] duration metric: took 18.891908ms WaitForService to wait for kubelet
	I0408 11:36:44.173312  385781 kubeadm.go:576] duration metric: took 16.113606667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:36:44.173332  385781 node_conditions.go:102] verifying NodePressure condition ...
	I0408 11:36:44.343651  385781 request.go:629] Waited for 170.234097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I0408 11:36:44.343767  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I0408 11:36:44.343772  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:44.343780  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:44.343785  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:44.347851  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:44.348634  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:36:44.348683  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:36:44.348696  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:36:44.348699  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:36:44.348704  385781 node_conditions.go:105] duration metric: took 175.367276ms to run NodePressure ...
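
The NodePressure step lists all nodes and reads their CPU and ephemeral-storage capacity, which is what produces the two pairs of capacity lines above. A small client-go sketch of the same read (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		storage := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
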
	I0408 11:36:44.348719  385781 start.go:240] waiting for startup goroutines ...
	I0408 11:36:44.348749  385781 start.go:254] writing updated cluster config ...
	I0408 11:36:44.350948  385781 out.go:177] 
	I0408 11:36:44.352496  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:36:44.352594  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:36:44.354576  385781 out.go:177] * Starting "ha-438604-m03" control-plane node in "ha-438604" cluster
	I0408 11:36:44.355714  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:36:44.355745  385781 cache.go:56] Caching tarball of preloaded images
	I0408 11:36:44.355855  385781 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:36:44.355869  385781 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:36:44.355963  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:36:44.356132  385781 start.go:360] acquireMachinesLock for ha-438604-m03: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:36:44.356174  385781 start.go:364] duration metric: took 22.618µs to acquireMachinesLock for "ha-438604-m03"
	I0408 11:36:44.356191  385781 start.go:93] Provisioning new machine with config: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:36:44.356279  385781 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0408 11:36:44.357958  385781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:36:44.358060  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:36:44.358096  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:36:44.373560  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0408 11:36:44.374113  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:36:44.374622  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:36:44.374645  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:36:44.375022  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:36:44.375234  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:36:44.375398  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:36:44.375601  385781 start.go:159] libmachine.API.Create for "ha-438604" (driver="kvm2")
	I0408 11:36:44.375640  385781 client.go:168] LocalClient.Create starting
	I0408 11:36:44.375700  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:36:44.375747  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:36:44.375770  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:36:44.375843  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:36:44.375868  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:36:44.375882  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:36:44.375911  385781 main.go:141] libmachine: Running pre-create checks...
	I0408 11:36:44.375923  385781 main.go:141] libmachine: (ha-438604-m03) Calling .PreCreateCheck
	I0408 11:36:44.376135  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetConfigRaw
	I0408 11:36:44.376529  385781 main.go:141] libmachine: Creating machine...
	I0408 11:36:44.376544  385781 main.go:141] libmachine: (ha-438604-m03) Calling .Create
	I0408 11:36:44.376708  385781 main.go:141] libmachine: (ha-438604-m03) Creating KVM machine...
	I0408 11:36:44.378138  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found existing default KVM network
	I0408 11:36:44.378335  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found existing private KVM network mk-ha-438604
	I0408 11:36:44.378520  385781 main.go:141] libmachine: (ha-438604-m03) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03 ...
	I0408 11:36:44.378552  385781 main.go:141] libmachine: (ha-438604-m03) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:36:44.378612  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.378446  386749 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:36:44.378698  385781 main.go:141] libmachine: (ha-438604-m03) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:36:44.643553  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.643422  386749 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa...
	I0408 11:36:44.816990  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.816859  386749 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/ha-438604-m03.rawdisk...
	I0408 11:36:44.817029  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Writing magic tar header
	I0408 11:36:44.817040  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Writing SSH key tar header
	I0408 11:36:44.817048  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.817022  386749 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03 ...
	I0408 11:36:44.817215  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03
	I0408 11:36:44.817252  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:36:44.817270  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03 (perms=drwx------)
	I0408 11:36:44.817283  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:36:44.817307  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:36:44.817321  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:36:44.817331  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:36:44.817344  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:36:44.817356  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:36:44.817367  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home
	I0408 11:36:44.817379  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Skipping /home - not owner
	I0408 11:36:44.817412  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:36:44.817434  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:36:44.817451  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:36:44.817464  385781 main.go:141] libmachine: (ha-438604-m03) Creating domain...
	I0408 11:36:44.818263  385781 main.go:141] libmachine: (ha-438604-m03) define libvirt domain using xml: 
	I0408 11:36:44.818279  385781 main.go:141] libmachine: (ha-438604-m03) <domain type='kvm'>
	I0408 11:36:44.818289  385781 main.go:141] libmachine: (ha-438604-m03)   <name>ha-438604-m03</name>
	I0408 11:36:44.818296  385781 main.go:141] libmachine: (ha-438604-m03)   <memory unit='MiB'>2200</memory>
	I0408 11:36:44.818305  385781 main.go:141] libmachine: (ha-438604-m03)   <vcpu>2</vcpu>
	I0408 11:36:44.818311  385781 main.go:141] libmachine: (ha-438604-m03)   <features>
	I0408 11:36:44.818318  385781 main.go:141] libmachine: (ha-438604-m03)     <acpi/>
	I0408 11:36:44.818326  385781 main.go:141] libmachine: (ha-438604-m03)     <apic/>
	I0408 11:36:44.818332  385781 main.go:141] libmachine: (ha-438604-m03)     <pae/>
	I0408 11:36:44.818346  385781 main.go:141] libmachine: (ha-438604-m03)     
	I0408 11:36:44.818353  385781 main.go:141] libmachine: (ha-438604-m03)   </features>
	I0408 11:36:44.818360  385781 main.go:141] libmachine: (ha-438604-m03)   <cpu mode='host-passthrough'>
	I0408 11:36:44.818392  385781 main.go:141] libmachine: (ha-438604-m03)   
	I0408 11:36:44.818420  385781 main.go:141] libmachine: (ha-438604-m03)   </cpu>
	I0408 11:36:44.818436  385781 main.go:141] libmachine: (ha-438604-m03)   <os>
	I0408 11:36:44.818449  385781 main.go:141] libmachine: (ha-438604-m03)     <type>hvm</type>
	I0408 11:36:44.818463  385781 main.go:141] libmachine: (ha-438604-m03)     <boot dev='cdrom'/>
	I0408 11:36:44.818474  385781 main.go:141] libmachine: (ha-438604-m03)     <boot dev='hd'/>
	I0408 11:36:44.818491  385781 main.go:141] libmachine: (ha-438604-m03)     <bootmenu enable='no'/>
	I0408 11:36:44.818502  385781 main.go:141] libmachine: (ha-438604-m03)   </os>
	I0408 11:36:44.818512  385781 main.go:141] libmachine: (ha-438604-m03)   <devices>
	I0408 11:36:44.818524  385781 main.go:141] libmachine: (ha-438604-m03)     <disk type='file' device='cdrom'>
	I0408 11:36:44.818552  385781 main.go:141] libmachine: (ha-438604-m03)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/boot2docker.iso'/>
	I0408 11:36:44.818570  385781 main.go:141] libmachine: (ha-438604-m03)       <target dev='hdc' bus='scsi'/>
	I0408 11:36:44.818578  385781 main.go:141] libmachine: (ha-438604-m03)       <readonly/>
	I0408 11:36:44.818586  385781 main.go:141] libmachine: (ha-438604-m03)     </disk>
	I0408 11:36:44.818603  385781 main.go:141] libmachine: (ha-438604-m03)     <disk type='file' device='disk'>
	I0408 11:36:44.818612  385781 main.go:141] libmachine: (ha-438604-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:36:44.818620  385781 main.go:141] libmachine: (ha-438604-m03)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/ha-438604-m03.rawdisk'/>
	I0408 11:36:44.818628  385781 main.go:141] libmachine: (ha-438604-m03)       <target dev='hda' bus='virtio'/>
	I0408 11:36:44.818633  385781 main.go:141] libmachine: (ha-438604-m03)     </disk>
	I0408 11:36:44.818641  385781 main.go:141] libmachine: (ha-438604-m03)     <interface type='network'>
	I0408 11:36:44.818647  385781 main.go:141] libmachine: (ha-438604-m03)       <source network='mk-ha-438604'/>
	I0408 11:36:44.818652  385781 main.go:141] libmachine: (ha-438604-m03)       <model type='virtio'/>
	I0408 11:36:44.818659  385781 main.go:141] libmachine: (ha-438604-m03)     </interface>
	I0408 11:36:44.818664  385781 main.go:141] libmachine: (ha-438604-m03)     <interface type='network'>
	I0408 11:36:44.818669  385781 main.go:141] libmachine: (ha-438604-m03)       <source network='default'/>
	I0408 11:36:44.818674  385781 main.go:141] libmachine: (ha-438604-m03)       <model type='virtio'/>
	I0408 11:36:44.818680  385781 main.go:141] libmachine: (ha-438604-m03)     </interface>
	I0408 11:36:44.818690  385781 main.go:141] libmachine: (ha-438604-m03)     <serial type='pty'>
	I0408 11:36:44.818710  385781 main.go:141] libmachine: (ha-438604-m03)       <target port='0'/>
	I0408 11:36:44.818726  385781 main.go:141] libmachine: (ha-438604-m03)     </serial>
	I0408 11:36:44.818739  385781 main.go:141] libmachine: (ha-438604-m03)     <console type='pty'>
	I0408 11:36:44.818763  385781 main.go:141] libmachine: (ha-438604-m03)       <target type='serial' port='0'/>
	I0408 11:36:44.818786  385781 main.go:141] libmachine: (ha-438604-m03)     </console>
	I0408 11:36:44.818800  385781 main.go:141] libmachine: (ha-438604-m03)     <rng model='virtio'>
	I0408 11:36:44.818815  385781 main.go:141] libmachine: (ha-438604-m03)       <backend model='random'>/dev/random</backend>
	I0408 11:36:44.818825  385781 main.go:141] libmachine: (ha-438604-m03)     </rng>
	I0408 11:36:44.818838  385781 main.go:141] libmachine: (ha-438604-m03)     
	I0408 11:36:44.818850  385781 main.go:141] libmachine: (ha-438604-m03)     
	I0408 11:36:44.818863  385781 main.go:141] libmachine: (ha-438604-m03)   </devices>
	I0408 11:36:44.818871  385781 main.go:141] libmachine: (ha-438604-m03) </domain>
	I0408 11:36:44.818886  385781 main.go:141] libmachine: (ha-438604-m03) 
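
The kvm2 driver assembles the libvirt domain XML shown line by line above and defines it through the libvirt API. Below is a minimal sketch using the libvirt Go bindings; the heavily trimmed XML and the connection URI are placeholders for illustration, not the driver's actual code path.

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Heavily trimmed placeholder XML; the full definition is the one logged above.
	xml := `<domain type='kvm'>
  <name>ha-438604-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

	dom, err := conn.DomainDefineXML(xml)
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}
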
	I0408 11:36:44.826308  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:b7:e6:7b in network default
	I0408 11:36:44.826831  385781 main.go:141] libmachine: (ha-438604-m03) Ensuring networks are active...
	I0408 11:36:44.826857  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:44.827673  385781 main.go:141] libmachine: (ha-438604-m03) Ensuring network default is active
	I0408 11:36:44.827996  385781 main.go:141] libmachine: (ha-438604-m03) Ensuring network mk-ha-438604 is active
	I0408 11:36:44.828425  385781 main.go:141] libmachine: (ha-438604-m03) Getting domain xml...
	I0408 11:36:44.829240  385781 main.go:141] libmachine: (ha-438604-m03) Creating domain...
	I0408 11:36:46.057797  385781 main.go:141] libmachine: (ha-438604-m03) Waiting to get IP...
	I0408 11:36:46.058891  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.059419  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.059470  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.059409  386749 retry.go:31] will retry after 229.460449ms: waiting for machine to come up
	I0408 11:36:46.290968  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.291521  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.291552  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.291461  386749 retry.go:31] will retry after 307.284768ms: waiting for machine to come up
	I0408 11:36:46.601546  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.602083  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.602120  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.602020  386749 retry.go:31] will retry after 327.627325ms: waiting for machine to come up
	I0408 11:36:46.931454  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.932038  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.932071  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.931977  386749 retry.go:31] will retry after 561.835462ms: waiting for machine to come up
	I0408 11:36:47.495895  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:47.496380  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:47.496411  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:47.496323  386749 retry.go:31] will retry after 576.910228ms: waiting for machine to come up
	I0408 11:36:48.075195  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:48.075642  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:48.075669  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:48.075597  386749 retry.go:31] will retry after 903.152639ms: waiting for machine to come up
	I0408 11:36:48.980395  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:48.980909  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:48.980940  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:48.980858  386749 retry.go:31] will retry after 729.415904ms: waiting for machine to come up
	I0408 11:36:49.712423  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:49.712861  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:49.712894  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:49.712804  386749 retry.go:31] will retry after 1.330546456s: waiting for machine to come up
	I0408 11:36:51.044838  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:51.045340  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:51.045365  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:51.045301  386749 retry.go:31] will retry after 1.572213961s: waiting for machine to come up
	I0408 11:36:52.620114  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:52.620704  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:52.620738  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:52.620664  386749 retry.go:31] will retry after 1.486096453s: waiting for machine to come up
	I0408 11:36:54.109491  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:54.110034  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:54.110066  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:54.109972  386749 retry.go:31] will retry after 2.645739084s: waiting for machine to come up
	I0408 11:36:56.757778  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:56.758368  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:56.758401  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:56.758295  386749 retry.go:31] will retry after 3.332565363s: waiting for machine to come up
	I0408 11:37:00.092561  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:00.093016  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:37:00.093049  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:37:00.092944  386749 retry.go:31] will retry after 3.296166589s: waiting for machine to come up
	I0408 11:37:03.393531  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:03.393975  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:37:03.394000  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:37:03.393924  386749 retry.go:31] will retry after 4.35483244s: waiting for machine to come up
	I0408 11:37:07.750339  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.750804  385781 main.go:141] libmachine: (ha-438604-m03) Found IP for machine: 192.168.39.94
	I0408 11:37:07.750840  385781 main.go:141] libmachine: (ha-438604-m03) Reserving static IP address...
	I0408 11:37:07.750853  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has current primary IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.751356  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find host DHCP lease matching {name: "ha-438604-m03", mac: "52:54:00:fa:7c:74", ip: "192.168.39.94"} in network mk-ha-438604
	I0408 11:37:07.840885  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Getting to WaitForSSH function...
	I0408 11:37:07.840923  385781 main.go:141] libmachine: (ha-438604-m03) Reserved static IP address: 192.168.39.94
	I0408 11:37:07.840938  385781 main.go:141] libmachine: (ha-438604-m03) Waiting for SSH to be available...
	I0408 11:37:07.844040  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.844579  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:07.844614  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.844821  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Using SSH client type: external
	I0408 11:37:07.844854  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa (-rw-------)
	I0408 11:37:07.844890  385781 main.go:141] libmachine: (ha-438604-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:37:07.844910  385781 main.go:141] libmachine: (ha-438604-m03) DBG | About to run SSH command:
	I0408 11:37:07.844931  385781 main.go:141] libmachine: (ha-438604-m03) DBG | exit 0
	I0408 11:37:07.975976  385781 main.go:141] libmachine: (ha-438604-m03) DBG | SSH cmd err, output: <nil>: 
	I0408 11:37:07.976259  385781 main.go:141] libmachine: (ha-438604-m03) KVM machine creation complete!
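
The "will retry after ..." lines above come from a loop that keeps looking up the new VM's DHCP lease, sleeping a randomized, growing interval between attempts. The sketch below shows that generic pattern; retryWithBackoff and its arguments are illustrative, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses,
// sleeping a jittered, growing interval between attempts.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	interval := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		sleep := interval + time.Duration(rand.Int63n(int64(interval))) // add jitter
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		interval *= 2 // grow the base interval, roughly like the intervals in the log above
	}
}

func main() {
	start := time.Now()
	err := retryWithBackoff(func() error {
		// Stand-in for "look up the domain's DHCP lease": pretend the lease appears after ~3s.
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("machine came up")
}
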
	I0408 11:37:07.976640  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetConfigRaw
	I0408 11:37:07.977212  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:07.977449  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:07.977639  385781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:37:07.977652  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:37:07.978945  385781 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:37:07.978972  385781 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:37:07.978993  385781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:37:07.979004  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:07.981555  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.981934  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:07.981964  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.982168  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:07.982360  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:07.982580  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:07.982737  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:07.982952  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:07.983277  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:07.983293  385781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:37:08.095435  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:37:08.095458  385781 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:37:08.095466  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.098194  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.098548  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.098581  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.098727  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.098972  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.099174  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.099345  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.099506  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.099720  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.099733  385781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:37:08.217134  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:37:08.217235  385781 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:37:08.217254  385781 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:37:08.217269  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:37:08.217685  385781 buildroot.go:166] provisioning hostname "ha-438604-m03"
	I0408 11:37:08.217714  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:37:08.217960  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.220587  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.221036  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.221062  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.221207  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.221485  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.221693  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.221878  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.222065  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.222294  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.222311  385781 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604-m03 && echo "ha-438604-m03" | sudo tee /etc/hostname
	I0408 11:37:08.352555  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604-m03
	
	I0408 11:37:08.352592  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.355632  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.356068  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.356093  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.356293  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.356525  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.356690  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.356874  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.357051  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.357266  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.357290  385781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:37:08.479375  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
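
Provisioning steps such as the hostname and /etc/hosts update above are run as one-off commands over SSH with the generated private key. A compact sketch with golang.org/x/crypto/ssh follows; the key path and the command are examples, not the exact provisioner code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/.minikube/machines/ha-438604-m03/id_rsa") // assumed key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.39.94:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(`sudo hostname ha-438604-m03 && hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote output: %s", out)
}
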
	I0408 11:37:08.479424  385781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:37:08.479446  385781 buildroot.go:174] setting up certificates
	I0408 11:37:08.479458  385781 provision.go:84] configureAuth start
	I0408 11:37:08.479472  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:37:08.479799  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:08.482989  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.483383  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.483422  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.483585  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.485698  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.486004  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.486034  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.486202  385781 provision.go:143] copyHostCerts
	I0408 11:37:08.486239  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:37:08.486272  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:37:08.486281  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:37:08.486366  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:37:08.486441  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:37:08.486458  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:37:08.486465  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:37:08.486486  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:37:08.486531  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:37:08.486554  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:37:08.486562  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:37:08.486586  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:37:08.486643  385781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604-m03 san=[127.0.0.1 192.168.39.94 ha-438604-m03 localhost minikube]
	I0408 11:37:08.592303  385781 provision.go:177] copyRemoteCerts
	I0408 11:37:08.592372  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:37:08.592406  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.595262  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.595748  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.595786  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.595992  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.596254  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.596430  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.596621  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:08.687708  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:37:08.687789  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:37:08.715553  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:37:08.715634  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 11:37:08.745648  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:37:08.745722  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 11:37:08.773099  385781 provision.go:87] duration metric: took 293.624604ms to configureAuth
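configureAuth above generated a machine server certificate with SANs [127.0.0.1 192.168.39.94 ha-438604-m03 localhost minikube] and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hedged way to inspect what landed there, using standard openssl rather than minikube's own code path (paths from the scp lines above):
    # Inspect the machine cert provisioned above (sketch)
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
    # expected SANs: 127.0.0.1, 192.168.39.94, ha-438604-m03, localhost, minikube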
	I0408 11:37:08.773142  385781 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:37:08.773371  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:37:08.773452  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.776051  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.776430  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.776461  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.776720  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.776956  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.777103  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.777234  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.777466  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.777676  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.777700  385781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:37:09.056944  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:37:09.056989  385781 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:37:09.057025  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetURL
	I0408 11:37:09.058445  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Using libvirt version 6000000
	I0408 11:37:09.060835  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.061248  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.061293  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.061482  385781 main.go:141] libmachine: Docker is up and running!
	I0408 11:37:09.061504  385781 main.go:141] libmachine: Reticulating splines...
	I0408 11:37:09.061518  385781 client.go:171] duration metric: took 24.685861155s to LocalClient.Create
	I0408 11:37:09.061547  385781 start.go:167] duration metric: took 24.685946543s to libmachine.API.Create "ha-438604"
	I0408 11:37:09.061560  385781 start.go:293] postStartSetup for "ha-438604-m03" (driver="kvm2")
	I0408 11:37:09.061575  385781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:37:09.061604  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.061872  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:37:09.061902  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:09.064565  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.064951  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.064985  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.065226  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.065442  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.065628  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.065802  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:09.156100  385781 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:37:09.160949  385781 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:37:09.160986  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:37:09.161064  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:37:09.161145  385781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:37:09.161157  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:37:09.161256  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:37:09.172986  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:37:09.203536  385781 start.go:296] duration metric: took 141.959614ms for postStartSetup
	I0408 11:37:09.203612  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetConfigRaw
	I0408 11:37:09.204351  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:09.207273  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.207708  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.207749  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.208104  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:37:09.208335  385781 start.go:128] duration metric: took 24.852044083s to createHost
	I0408 11:37:09.208365  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:09.211104  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.211536  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.211568  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.211781  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.211985  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.212132  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.212303  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.212530  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:09.212700  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:09.212710  385781 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:37:09.325630  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576229.289336082
	
	I0408 11:37:09.325653  385781 fix.go:216] guest clock: 1712576229.289336082
	I0408 11:37:09.325661  385781 fix.go:229] Guest: 2024-04-08 11:37:09.289336082 +0000 UTC Remote: 2024-04-08 11:37:09.208348473 +0000 UTC m=+187.191397319 (delta=80.987609ms)
	I0408 11:37:09.325677  385781 fix.go:200] guest clock delta is within tolerance: 80.987609ms
	I0408 11:37:09.325684  385781 start.go:83] releasing machines lock for "ha-438604-m03", held for 24.969499516s
	I0408 11:37:09.325707  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.325974  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:09.328879  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.329376  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.329411  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.331924  385781 out.go:177] * Found network options:
	I0408 11:37:09.333553  385781 out.go:177]   - NO_PROXY=192.168.39.99,192.168.39.219
	W0408 11:37:09.334989  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 11:37:09.335009  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:37:09.335028  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.335728  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.335996  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.336117  385781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:37:09.336160  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	W0408 11:37:09.336241  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 11:37:09.336271  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:37:09.336347  385781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:37:09.336372  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:09.339064  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339094  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339500  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.339545  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339576  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.339600  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339741  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.339824  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.339915  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.340004  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.340020  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.340175  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.340183  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:09.340336  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:09.586150  385781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:37:09.595159  385781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:37:09.595247  385781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:37:09.616430  385781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:37:09.616466  385781 start.go:494] detecting cgroup driver to use...
	I0408 11:37:09.616543  385781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:37:09.637204  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:37:09.654536  385781 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:37:09.654619  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:37:09.672473  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:37:09.687985  385781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:37:09.815363  385781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:37:09.954588  385781 docker.go:233] disabling docker service ...
	I0408 11:37:09.954680  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:37:09.972200  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:37:09.987847  385781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:37:10.136313  385781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:37:10.280740  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:37:10.297553  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:37:10.319544  385781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:37:10.319607  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.331398  385781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:37:10.331476  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.343549  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.355505  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.367389  385781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:37:10.379207  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.390490  385781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.411310  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
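Taken together, the sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image, force the cgroupfs cgroup manager with conmon in the "pod" cgroup, and open unprivileged port 0 via default_sysctls. A hedged spot-check of the resulting drop-in (file path from the commands above; the exact surrounding layout of 02-crio.conf is assumed):
    # Spot-check the CRI-O drop-in after the edits above (sketch)
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (roughly):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",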
	I0408 11:37:10.423526  385781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:37:10.434358  385781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:37:10.434465  385781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:37:10.448911  385781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
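The status 255 above just means the bridge netfilter module was not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables did not exist; loading br_netfilter and re-reading the sysctl is the standard remedy, which is exactly the fallback taken here (a sketch of the same sequence):
    # What the fallback above amounts to (sketch)
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # resolvable once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"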
	I0408 11:37:10.460213  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:37:10.603877  385781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 11:37:10.771770  385781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:37:10.771855  385781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:37:10.777131  385781 start.go:562] Will wait 60s for crictl version
	I0408 11:37:10.777207  385781 ssh_runner.go:195] Run: which crictl
	I0408 11:37:10.781382  385781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:37:10.820531  385781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:37:10.820611  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:37:10.851504  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:37:10.885901  385781 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:37:10.887559  385781 out.go:177]   - env NO_PROXY=192.168.39.99
	I0408 11:37:10.888895  385781 out.go:177]   - env NO_PROXY=192.168.39.99,192.168.39.219
	I0408 11:37:10.890184  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:10.893382  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:10.893804  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:10.893837  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:10.894084  385781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:37:10.898729  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:37:10.913466  385781 mustload.go:65] Loading cluster: ha-438604
	I0408 11:37:10.913734  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:37:10.913983  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:37:10.914026  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:37:10.930307  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0408 11:37:10.930770  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:37:10.931305  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:37:10.931321  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:37:10.931677  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:37:10.931927  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:37:10.933537  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:37:10.933822  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:37:10.933866  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:37:10.949890  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0408 11:37:10.950379  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:37:10.950915  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:37:10.950941  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:37:10.951324  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:37:10.951606  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:37:10.951834  385781 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.94
	I0408 11:37:10.951850  385781 certs.go:194] generating shared ca certs ...
	I0408 11:37:10.951871  385781 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:37:10.952015  385781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:37:10.952055  385781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:37:10.952066  385781 certs.go:256] generating profile certs ...
	I0408 11:37:10.952133  385781 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:37:10.952159  385781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499
	I0408 11:37:10.952175  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.94 192.168.39.254]
	I0408 11:37:11.146003  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499 ...
	I0408 11:37:11.146038  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499: {Name:mk0ea8c01c5a8fbfaf8fbdffa60e8eddbdccc24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:37:11.146217  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499 ...
	I0408 11:37:11.146230  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499: {Name:mk7ae3a704ce00bc3504ab883d6549f49766f91e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:37:11.146295  385781 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:37:11.146423  385781 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
	I0408 11:37:11.146584  385781 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
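The apiserver certificate regenerated above is what lets the new control-plane node serve on its own IP as well as the shared VIP. A hedged way to confirm the SAN list from the host (openssl invocation; the path comes from the log):
    # Confirm the SANs baked into the regenerated apiserver cert (sketch)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expected IPs: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.94 192.168.39.254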
	I0408 11:37:11.146603  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:37:11.146616  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:37:11.146628  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:37:11.146642  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:37:11.146654  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:37:11.146664  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:37:11.146675  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:37:11.146684  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:37:11.146729  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:37:11.146760  385781 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:37:11.146769  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:37:11.146790  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:37:11.146814  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:37:11.146835  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:37:11.146873  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:37:11.146898  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.146911  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.146925  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.146960  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:37:11.150357  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:11.150720  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:37:11.150757  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:11.151022  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:37:11.151258  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:37:11.151452  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:37:11.151631  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:37:11.236259  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0408 11:37:11.241957  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 11:37:11.254270  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0408 11:37:11.259970  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 11:37:11.274475  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 11:37:11.279780  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 11:37:11.292418  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0408 11:37:11.297150  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0408 11:37:11.308254  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0408 11:37:11.313162  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 11:37:11.324294  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0408 11:37:11.329082  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 11:37:11.341906  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:37:11.372757  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:37:11.401159  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:37:11.431526  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:37:11.460059  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0408 11:37:11.492587  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 11:37:11.521088  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:37:11.548892  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:37:11.580086  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:37:11.612454  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:37:11.641263  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:37:11.670808  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 11:37:11.692240  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 11:37:11.712198  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 11:37:11.732399  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0408 11:37:11.751540  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 11:37:11.771024  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 11:37:11.792000  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0408 11:37:11.811530  385781 ssh_runner.go:195] Run: openssl version
	I0408 11:37:11.818828  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:37:11.831849  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.836953  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.837044  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.843886  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 11:37:11.855839  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:37:11.867984  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.873242  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.873327  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.879807  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:37:11.892409  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:37:11.905694  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.911142  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.911222  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.917642  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
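The hash/link pairs above follow the usual OpenSSL CA-directory convention: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <subject-hash>.0, which is why each ln is preceded by an openssl -hash call. A minimal sketch of the same pattern for one of the certs (names taken from the log):
    # OpenSSL hashed-directory convention used above (sketch)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem   # prints 3ec20f2e
    sudo ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0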
	I0408 11:37:11.929996  385781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:37:11.934738  385781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:37:11.934811  385781 kubeadm.go:928] updating node {m03 192.168.39.94 8443 v1.29.3 crio true true} ...
	I0408 11:37:11.934917  385781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:37:11.934950  385781 kube-vip.go:111] generating kube-vip config ...
	I0408 11:37:11.935004  385781 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:37:11.953628  385781 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:37:11.953708  385781 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
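The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml as a static pod (see the scp a few lines down), so kubelet starts kube-vip and the VIP 192.168.39.254 gets announced on eth0. A hedged check once kubelet is running on the node:
    # Verify kube-vip came up and claimed the VIP (sketch; run on the guest)
    sudo crictl pods --name kube-vip
    ip addr show eth0 | grep 192.168.39.254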
	I0408 11:37:11.953764  385781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:37:11.964496  385781 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0408 11:37:11.964566  385781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0408 11:37:11.975539  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0408 11:37:11.975580  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:37:11.975585  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0408 11:37:11.975603  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:37:11.975607  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0408 11:37:11.975661  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:37:11.975666  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:37:11.975666  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:37:11.980908  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0408 11:37:11.980957  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0408 11:37:12.023167  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:37:12.023175  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0408 11:37:12.023262  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0408 11:37:12.023295  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:37:12.065496  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0408 11:37:12.065542  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
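The three Kubernetes binaries were copied from the host-side cache rather than re-downloaded on the guest. A quick check that the transfer landed where kubeadm expects it (sketch; directory from the log):
    # Confirm the binaries transferred above (sketch)
    ls -l /var/lib/minikube/binaries/v1.29.3/
    /var/lib/minikube/binaries/v1.29.3/kubelet --version   # expected: Kubernetes v1.29.3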
	I0408 11:37:13.022696  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 11:37:13.034529  385781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0408 11:37:13.053420  385781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:37:13.073781  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0408 11:37:13.093979  385781 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:37:13.098407  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:37:13.112969  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:37:13.256681  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:37:13.278747  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:37:13.279407  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:37:13.279489  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:37:13.296735  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0408 11:37:13.297203  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:37:13.297808  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:37:13.297836  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:37:13.298183  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:37:13.298451  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:37:13.298600  385781 start.go:316] joinCluster: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:37:13.298731  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 11:37:13.298746  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:37:13.301929  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:13.302485  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:37:13.302514  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:13.302735  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:37:13.302928  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:37:13.303103  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:37:13.303265  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:37:13.485408  385781 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:37:13.485506  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lvgud.zpbg0h9e2vljhkuc --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m03 --control-plane --apiserver-advertise-address=192.168.39.94 --apiserver-bind-port=8443"
	I0408 11:37:38.723813  385781 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lvgud.zpbg0h9e2vljhkuc --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m03 --control-plane --apiserver-advertise-address=192.168.39.94 --apiserver-bind-port=8443": (25.238270369s)
	I0408 11:37:38.723869  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 11:37:39.175661  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-438604-m03 minikube.k8s.io/updated_at=2024_04_08T11_37_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=ha-438604 minikube.k8s.io/primary=false
	I0408 11:37:39.316924  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-438604-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 11:37:39.438710  385781 start.go:318] duration metric: took 26.140100004s to joinCluster
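The 26s joinCluster step above is the whole control-plane join sequence: mint a join command on the primary with `kubeadm token create --print-join-command`, run `kubeadm join ... --control-plane` on the new machine, start kubelet, then label the node and drop the control-plane NoSchedule taint. Below is a minimal Go sketch of that flow, assuming a plain `ssh` binary on PATH and abbreviating the labels to the single `minikube.k8s.io/primary=false` shown above; minikube itself drives this through its own SSH runner, so the helper names here are illustrative only.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOn executes a shell command on host over plain ssh and returns its output.
// (Assumption: ssh access to the node is already configured.)
func runOn(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// joinControlPlane mirrors the logged sequence: create a join command on the
// primary, run it on the new node with control-plane flags, start kubelet,
// then label the node and remove the control-plane NoSchedule taint.
func joinControlPlane(primary, newHost, nodeName, advertiseIP string) error {
	joinCmd, err := runOn(primary, "sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		return fmt.Errorf("token create: %w", err)
	}
	join := fmt.Sprintf("sudo %s --control-plane --node-name=%s --apiserver-advertise-address=%s --apiserver-bind-port=8443 --ignore-preflight-errors=all",
		joinCmd, nodeName, advertiseIP)
	if _, err := runOn(newHost, join); err != nil {
		return fmt.Errorf("kubeadm join: %w", err)
	}
	if _, err := runOn(newHost, "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"); err != nil {
		return fmt.Errorf("start kubelet: %w", err)
	}
	if _, err := runOn(primary, "kubectl label --overwrite nodes "+nodeName+" minikube.k8s.io/primary=false"); err != nil {
		return fmt.Errorf("label node: %w", err)
	}
	if _, err := runOn(primary, "kubectl taint nodes "+nodeName+" node-role.kubernetes.io/control-plane:NoSchedule-"); err != nil {
		return fmt.Errorf("untaint node: %w", err)
	}
	return nil
}
```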
	I0408 11:37:39.438843  385781 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:37:39.440788  385781 out.go:177] * Verifying Kubernetes components...
	I0408 11:37:39.439179  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:37:39.442451  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:37:39.664745  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:37:39.693760  385781 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:37:39.694147  385781 kapi.go:59] client config for ha-438604: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 11:37:39.694269  385781 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
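The rest.Config dump above has QPS:0 and Burst:0, so client-go falls back to its built-in defaults and rate-limits requests on the client side; that limiter is what produces the recurring "Waited ... due to client-side throttling, not priority and fairness" lines further down. A minimal sketch (standard client-go packages, not minikube's kapi helper) of building the same client and loosening that limiter:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset builds a clientset from a kubeconfig file, as kapi.go does above.
// Leaving QPS/Burst at zero keeps client-go's default client-side rate limiter
// (a handful of requests per second), which the polling loops below repeatedly
// trip; raising both values removes most of the throttling waits.
func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is ~5 when left at 0
	cfg.Burst = 100 // default is ~10 when left at 0
	return kubernetes.NewForConfig(cfg)
}
```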
	I0408 11:37:39.694566  385781 node_ready.go:35] waiting up to 6m0s for node "ha-438604-m03" to be "Ready" ...
	I0408 11:37:39.694678  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:39.694694  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:39.694709  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:39.694715  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:39.704933  385781 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0408 11:37:40.195747  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:40.195781  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:40.195793  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:40.195798  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:40.200780  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:40.695221  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:40.695249  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:40.695258  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:40.695263  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:40.698672  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:41.195419  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:41.195447  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:41.195455  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:41.195459  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:41.203523  385781 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0408 11:37:41.694789  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:41.694822  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:41.694834  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:41.694840  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:41.700214  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:41.701021  385781 node_ready.go:53] node "ha-438604-m03" has status "Ready":"False"
	I0408 11:37:42.195768  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:42.195798  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:42.195810  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:42.195818  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:42.200064  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:42.695528  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:42.695558  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:42.695568  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:42.695574  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:42.700785  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:43.195491  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:43.195519  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:43.195531  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:43.195536  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:43.199386  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:43.695025  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:43.695122  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:43.695147  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:43.695153  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:43.699901  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:44.194848  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:44.194875  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:44.194882  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:44.194886  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:44.199180  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:44.199912  385781 node_ready.go:53] node "ha-438604-m03" has status "Ready":"False"
	I0408 11:37:44.695620  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:44.695653  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:44.695669  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:44.695675  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:44.699624  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:45.195651  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:45.195676  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:45.195698  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:45.195702  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:45.199680  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:45.694889  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:45.694917  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:45.694926  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:45.694930  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:45.698254  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.195612  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:46.195643  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.195651  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.195654  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.199847  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:46.200579  385781 node_ready.go:53] node "ha-438604-m03" has status "Ready":"False"
	I0408 11:37:46.694960  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:46.694986  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.694994  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.694998  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.698503  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.699056  385781 node_ready.go:49] node "ha-438604-m03" has status "Ready":"True"
	I0408 11:37:46.699081  385781 node_ready.go:38] duration metric: took 7.004495577s for node "ha-438604-m03" to be "Ready" ...
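The block above is node_ready.go polling GET /api/v1/nodes/ha-438604-m03 roughly twice a second until the node's Ready condition flips to True (about 7s here). A minimal client-go sketch of the same wait, assuming a reasonably recent apimachinery for PollUntilContextTimeout; the function name is illustrative, not minikube's:

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node object until its Ready condition is True,
// mirroring the ~500ms GET loop in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```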
	I0408 11:37:46.699090  385781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:37:46.699153  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:46.699164  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.699171  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.699175  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.706322  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:46.713379  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.713467  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-7gpzq
	I0408 11:37:46.713476  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.713484  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.713489  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.717087  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.717817  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:46.717836  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.717845  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.717852  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.720991  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.721480  385781 pod_ready.go:92] pod "coredns-76f75df574-7gpzq" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.721501  385781 pod_ready.go:81] duration metric: took 8.094867ms for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.721511  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.721584  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-wqrvc
	I0408 11:37:46.721592  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.721600  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.721608  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.725210  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.726413  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:46.726429  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.726437  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.726444  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.730156  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.730656  385781 pod_ready.go:92] pod "coredns-76f75df574-wqrvc" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.730675  385781 pod_ready.go:81] duration metric: took 9.158724ms for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.730685  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.730742  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604
	I0408 11:37:46.730750  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.730757  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.730763  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.734889  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:46.735488  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:46.735504  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.735517  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.735521  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.739755  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:46.740815  385781 pod_ready.go:92] pod "etcd-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.740837  385781 pod_ready.go:81] duration metric: took 10.142816ms for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.740852  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.740928  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:37:46.740942  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.740951  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.740958  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.744401  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.744944  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:46.744959  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.744967  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.744970  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.748116  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.748776  385781 pod_ready.go:92] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.748796  385781 pod_ready.go:81] duration metric: took 7.935841ms for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.748810  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.895893  385781 request.go:629] Waited for 146.996455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:46.895997  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:46.896005  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.896025  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.896035  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.899984  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.095987  385781 request.go:629] Waited for 195.192122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.096087  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.096112  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.096129  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.096138  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.099895  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.295041  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:47.295075  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.295087  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.295093  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.299113  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.495346  385781 request.go:629] Waited for 195.401092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.495426  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.495432  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.495444  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.495449  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.499354  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.749136  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:47.749162  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.749171  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.749175  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.753091  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.895116  385781 request.go:629] Waited for 141.241107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.895208  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.895216  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.895228  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.895235  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.899545  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:48.249964  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:48.249995  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.250004  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.250011  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.253959  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:48.296051  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:48.296079  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.296088  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.296099  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.300169  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:48.749577  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:48.749601  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.749612  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.749616  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.753657  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:48.754501  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:48.754525  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.754533  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.754537  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.757990  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:48.758586  385781 pod_ready.go:102] pod "etcd-ha-438604-m03" in "kube-system" namespace has status "Ready":"False"
	I0408 11:37:49.250110  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:49.250140  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.250153  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.250158  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.253773  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.254408  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:49.254427  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.254435  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.254439  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.257796  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.258415  385781 pod_ready.go:92] pod "etcd-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:49.258438  385781 pod_ready.go:81] duration metric: took 2.509619072s for pod "etcd-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.258460  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.295858  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604
	I0408 11:37:49.295892  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.295904  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.295912  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.301861  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:49.495214  385781 request.go:629] Waited for 192.397135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:49.495285  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:49.495292  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.495304  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.495308  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.499567  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:49.500461  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:49.500486  385781 pod_ready.go:81] duration metric: took 242.01305ms for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.500497  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.695924  385781 request.go:629] Waited for 195.350089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m02
	I0408 11:37:49.696036  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m02
	I0408 11:37:49.696049  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.696060  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.696071  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.699467  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.895975  385781 request.go:629] Waited for 195.365088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:49.896059  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:49.896065  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.896076  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.896086  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.899934  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.900897  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:49.900923  385781 pod_ready.go:81] duration metric: took 400.417819ms for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.900997  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.095906  385781 request.go:629] Waited for 194.787366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m03
	I0408 11:37:50.095970  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m03
	I0408 11:37:50.095976  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.095984  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.095988  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.100156  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:50.295923  385781 request.go:629] Waited for 195.000072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:50.296002  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:50.296008  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.296016  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.296021  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.299630  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:50.300544  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:50.300568  385781 pod_ready.go:81] duration metric: took 399.550906ms for pod "kube-apiserver-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.300580  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.495762  385781 request.go:629] Waited for 195.094865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604
	I0408 11:37:50.495848  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604
	I0408 11:37:50.495854  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.495861  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.495866  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.499793  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:50.695931  385781 request.go:629] Waited for 195.307388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:50.696014  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:50.696022  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.696033  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.696049  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.699441  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:50.700455  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:50.700489  385781 pod_ready.go:81] duration metric: took 399.900475ms for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.700516  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.895380  385781 request.go:629] Waited for 194.755754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m02
	I0408 11:37:50.895463  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m02
	I0408 11:37:50.895468  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.895476  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.895484  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.899901  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:51.095980  385781 request.go:629] Waited for 195.181145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:51.096058  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:51.096065  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.096080  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.096091  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.099664  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:51.100294  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:51.100317  385781 pod_ready.go:81] duration metric: took 399.791343ms for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:51.100331  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:51.295933  385781 request.go:629] Waited for 195.501353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.296019  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.296029  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.296042  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.296055  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.299759  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:51.495030  385781 request.go:629] Waited for 194.308964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.495102  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.495109  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.495118  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.495125  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.500912  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:51.695468  385781 request.go:629] Waited for 94.331993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.695561  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.695572  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.695583  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.695591  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.699409  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:51.895362  385781 request.go:629] Waited for 195.114116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.895440  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.895445  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.895452  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.895457  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.899519  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:52.100798  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:52.100824  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.100832  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.100836  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.105071  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:52.296055  385781 request.go:629] Waited for 190.055532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.296126  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.296131  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.296139  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.296146  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.299846  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:52.600940  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:52.600973  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.600983  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.600989  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.605306  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:52.695330  385781 request.go:629] Waited for 89.268812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.695446  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.695461  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.695472  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.695479  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.699541  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:53.101280  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:53.101306  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.101314  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.101318  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.105893  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:53.106693  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:53.106719  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.106727  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.106732  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.109962  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:53.110538  385781 pod_ready.go:102] pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace has status "Ready":"False"
	I0408 11:37:53.601512  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:53.601538  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.601546  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.601550  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.608614  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:53.609258  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:53.609276  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.609284  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.609288  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.612747  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.100936  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:54.100962  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.100971  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.100975  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.104735  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.105410  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:54.105429  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.105436  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.105442  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.108934  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.109589  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:54.109611  385781 pod_ready.go:81] duration metric: took 3.009273352s for pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.109629  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.109692  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5vc66
	I0408 11:37:54.109700  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.109707  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.109712  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.113136  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.295127  385781 request.go:629] Waited for 181.216713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:54.295205  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:54.295215  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.295229  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.295236  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.299059  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.299985  385781 pod_ready.go:92] pod "kube-proxy-5vc66" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:54.300019  385781 pod_ready.go:81] duration metric: took 190.37877ms for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.300031  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pcbq6" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.495509  385781 request.go:629] Waited for 195.397939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcbq6
	I0408 11:37:54.495608  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcbq6
	I0408 11:37:54.495622  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.495630  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.495634  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.499780  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:54.695453  385781 request.go:629] Waited for 194.921573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:54.695553  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:54.695565  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.695579  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.695586  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.698943  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.699650  385781 pod_ready.go:92] pod "kube-proxy-pcbq6" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:54.699674  385781 pod_ready.go:81] duration metric: took 399.635169ms for pod "kube-proxy-pcbq6" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.699707  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.895813  385781 request.go:629] Waited for 196.022595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:37:54.895923  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:37:54.895933  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.895940  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.895944  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.899759  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.095933  385781 request.go:629] Waited for 195.398867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.096018  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.096025  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.096035  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.096044  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.099980  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.100822  385781 pod_ready.go:92] pod "kube-proxy-v98zm" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:55.100845  385781 pod_ready.go:81] duration metric: took 401.128262ms for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.100862  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.295961  385781 request.go:629] Waited for 195.008095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:37:55.296058  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:37:55.296064  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.296071  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.296075  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.300155  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:55.495389  385781 request.go:629] Waited for 194.373056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.495460  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.495465  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.495472  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.495477  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.499329  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.500126  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:55.500150  385781 pod_ready.go:81] duration metric: took 399.277428ms for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.500161  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.695158  385781 request.go:629] Waited for 194.909862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:37:55.695232  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:37:55.695238  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.695243  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.695247  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.699042  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.895380  385781 request.go:629] Waited for 195.416353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:55.895475  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:55.895484  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.895493  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.895500  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.899678  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:55.900234  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:55.900255  385781 pod_ready.go:81] duration metric: took 400.086899ms for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.900265  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:56.095438  385781 request.go:629] Waited for 195.060495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m03
	I0408 11:37:56.095512  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m03
	I0408 11:37:56.095517  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.095524  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.095529  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.099919  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:56.295002  385781 request.go:629] Waited for 193.443696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:56.295125  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:56.295138  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.295148  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.295158  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.299096  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:56.299906  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:56.299931  385781 pod_ready.go:81] duration metric: took 399.658719ms for pod "kube-scheduler-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:56.299947  385781 pod_ready.go:38] duration metric: took 9.600847352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:37:56.299975  385781 api_server.go:52] waiting for apiserver process to appear ...
	I0408 11:37:56.300050  385781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:37:56.316890  385781 api_server.go:72] duration metric: took 16.877983147s to wait for apiserver process to appear ...
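The apiserver process wait above is just the pgrep probe shown in the previous line, run over SSH on the node. A rough local equivalent, assuming sudo and pgrep are available:

```go
package main

import "os/exec"

// apiserverProcessUp mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe: pgrep exits 0 only when at least one matching process exists.
func apiserverProcessUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}
```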
	I0408 11:37:56.316924  385781 api_server.go:88] waiting for apiserver healthz status ...
	I0408 11:37:56.316952  385781 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I0408 11:37:56.323765  385781 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I0408 11:37:56.323859  385781 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I0408 11:37:56.323870  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.323882  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.323898  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.325018  385781 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0408 11:37:56.325105  385781 api_server.go:141] control plane version: v1.29.3
	I0408 11:37:56.325123  385781 api_server.go:131] duration metric: took 8.190044ms to wait for apiserver health ...
	I0408 11:37:56.325142  385781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 11:37:56.495649  385781 request.go:629] Waited for 170.409619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.495731  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.495738  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.495745  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.495750  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.503560  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:56.510633  385781 system_pods.go:59] 24 kube-system pods found
	I0408 11:37:56.510670  385781 system_pods.go:61] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:37:56.510675  385781 system_pods.go:61] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:37:56.510679  385781 system_pods.go:61] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:37:56.510682  385781 system_pods.go:61] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:37:56.510686  385781 system_pods.go:61] "etcd-ha-438604-m03" [297e3d28-7d53-418e-9467-a3e167d27686] Running
	I0408 11:37:56.510689  385781 system_pods.go:61] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:37:56.510691  385781 system_pods.go:61] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:37:56.510694  385781 system_pods.go:61] "kindnet-dg6vt" [08b93d6c-a55d-481d-9a53-39aaab016531] Running
	I0408 11:37:56.510701  385781 system_pods.go:61] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:37:56.510704  385781 system_pods.go:61] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:37:56.510707  385781 system_pods.go:61] "kube-apiserver-ha-438604-m03" [26bcb7f0-b36e-486f-92c5-704d8aacc4a9] Running
	I0408 11:37:56.510713  385781 system_pods.go:61] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:37:56.510717  385781 system_pods.go:61] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:37:56.510720  385781 system_pods.go:61] "kube-controller-manager-ha-438604-m03" [ac6d4002-24bc-42d7-b683-20c3e6ec248b] Running
	I0408 11:37:56.510725  385781 system_pods.go:61] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:37:56.510728  385781 system_pods.go:61] "kube-proxy-pcbq6" [0af7d53e-ffe2-4c81-8d19-ff9e103795d2] Running
	I0408 11:37:56.510734  385781 system_pods.go:61] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:37:56.510737  385781 system_pods.go:61] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:37:56.510740  385781 system_pods.go:61] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:37:56.510743  385781 system_pods.go:61] "kube-scheduler-ha-438604-m03" [de828024-561c-4f5c-b161-9071f65c9090] Running
	I0408 11:37:56.510746  385781 system_pods.go:61] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:37:56.510752  385781 system_pods.go:61] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:37:56.510754  385781 system_pods.go:61] "kube-vip-ha-438604-m03" [4c4def5d-6239-411f-9126-32118b23d25d] Running
	I0408 11:37:56.510757  385781 system_pods.go:61] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:37:56.510764  385781 system_pods.go:74] duration metric: took 185.612947ms to wait for pod list to return data ...
	I0408 11:37:56.510774  385781 default_sa.go:34] waiting for default service account to be created ...
	I0408 11:37:56.695150  385781 request.go:629] Waited for 184.289452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:37:56.695236  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:37:56.695245  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.695257  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.695270  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.698252  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:37:56.698421  385781 default_sa.go:45] found service account: "default"
	I0408 11:37:56.698446  385781 default_sa.go:55] duration metric: took 187.661878ms for default service account to be created ...
	I0408 11:37:56.698459  385781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 11:37:56.895773  385781 request.go:629] Waited for 197.220291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.895855  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.895863  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.895872  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.895877  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.903591  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:56.910251  385781 system_pods.go:86] 24 kube-system pods found
	I0408 11:37:56.910283  385781 system_pods.go:89] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:37:56.910288  385781 system_pods.go:89] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:37:56.910293  385781 system_pods.go:89] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:37:56.910298  385781 system_pods.go:89] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:37:56.910302  385781 system_pods.go:89] "etcd-ha-438604-m03" [297e3d28-7d53-418e-9467-a3e167d27686] Running
	I0408 11:37:56.910306  385781 system_pods.go:89] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:37:56.910310  385781 system_pods.go:89] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:37:56.910314  385781 system_pods.go:89] "kindnet-dg6vt" [08b93d6c-a55d-481d-9a53-39aaab016531] Running
	I0408 11:37:56.910317  385781 system_pods.go:89] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:37:56.910321  385781 system_pods.go:89] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:37:56.910326  385781 system_pods.go:89] "kube-apiserver-ha-438604-m03" [26bcb7f0-b36e-486f-92c5-704d8aacc4a9] Running
	I0408 11:37:56.910331  385781 system_pods.go:89] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:37:56.910337  385781 system_pods.go:89] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:37:56.910344  385781 system_pods.go:89] "kube-controller-manager-ha-438604-m03" [ac6d4002-24bc-42d7-b683-20c3e6ec248b] Running
	I0408 11:37:56.910349  385781 system_pods.go:89] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:37:56.910355  385781 system_pods.go:89] "kube-proxy-pcbq6" [0af7d53e-ffe2-4c81-8d19-ff9e103795d2] Running
	I0408 11:37:56.910362  385781 system_pods.go:89] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:37:56.910367  385781 system_pods.go:89] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:37:56.910373  385781 system_pods.go:89] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:37:56.910387  385781 system_pods.go:89] "kube-scheduler-ha-438604-m03" [de828024-561c-4f5c-b161-9071f65c9090] Running
	I0408 11:37:56.910393  385781 system_pods.go:89] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:37:56.910396  385781 system_pods.go:89] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:37:56.910400  385781 system_pods.go:89] "kube-vip-ha-438604-m03" [4c4def5d-6239-411f-9126-32118b23d25d] Running
	I0408 11:37:56.910406  385781 system_pods.go:89] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:37:56.910414  385781 system_pods.go:126] duration metric: took 211.945737ms to wait for k8s-apps to be running ...
	I0408 11:37:56.910422  385781 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 11:37:56.910482  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:37:56.926913  385781 system_svc.go:56] duration metric: took 16.478436ms WaitForService to wait for kubelet
	I0408 11:37:56.926957  385781 kubeadm.go:576] duration metric: took 17.488052693s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:37:56.926988  385781 node_conditions.go:102] verifying NodePressure condition ...
	I0408 11:37:57.095546  385781 request.go:629] Waited for 168.454408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I0408 11:37:57.095646  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I0408 11:37:57.095653  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:57.095664  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:57.095676  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:57.100191  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:57.101270  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:37:57.101292  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:37:57.101306  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:37:57.101311  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:37:57.101315  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:37:57.101320  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:37:57.101326  385781 node_conditions.go:105] duration metric: took 174.330595ms to run NodePressure ...
	I0408 11:37:57.101341  385781 start.go:240] waiting for startup goroutines ...
	I0408 11:37:57.101381  385781 start.go:254] writing updated cluster config ...
	I0408 11:37:57.101720  385781 ssh_runner.go:195] Run: rm -f paused
	I0408 11:37:57.156943  385781 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 11:37:57.159179  385781 out.go:177] * Done! kubectl is now configured to use "ha-438604" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.891358319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576490891329444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=545787ec-f240-4ec1-93b6-33b08849c2d6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.894447609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f08bb84b-bae5-4ddf-b6d5-737fa44d3bd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.896052139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f08bb84b-bae5-4ddf-b6d5-737fa44d3bd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.896356261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f08bb84b-bae5-4ddf-b6d5-737fa44d3bd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.936160130Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bf6f580-9827-412a-b714-ded898ff7f63 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.936237118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bf6f580-9827-412a-b714-ded898ff7f63 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.937713451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=442b47a7-dcef-48db-9bb4-5a727bf29034 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.938222403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576490938198378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=442b47a7-dcef-48db-9bb4-5a727bf29034 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.939186536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b40c437-7ca6-4cf8-bda3-f14e3182cb7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.939238978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b40c437-7ca6-4cf8-bda3-f14e3182cb7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.939487842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b40c437-7ca6-4cf8-bda3-f14e3182cb7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.982927827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4446cf5-a7c4-4b33-8717-5bdd59ebaec2 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.983025220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4446cf5-a7c4-4b33-8717-5bdd59ebaec2 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.984851042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45d8b56b-a998-4fcb-96c5-3f7dac55a8b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.985568281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576490985493780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45d8b56b-a998-4fcb-96c5-3f7dac55a8b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.986413219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdf35b66-5ad1-4ed4-8da1-c53020a6bfd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.986486390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdf35b66-5ad1-4ed4-8da1-c53020a6bfd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:30 ha-438604 crio[685]: time="2024-04-08 11:41:30.986810915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdf35b66-5ad1-4ed4-8da1-c53020a6bfd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.029318930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb8014ab-6b8d-43eb-abc5-1797d9c927f3 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.029412554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb8014ab-6b8d-43eb-abc5-1797d9c927f3 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.030437971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8be8b5c-574f-47f9-98f6-9034de819532 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.030925333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576491030902165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8be8b5c-574f-47f9-98f6-9034de819532 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.031477673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbfccf68-9aef-44ad-aaca-73390e66eec0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.031595620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbfccf68-9aef-44ad-aaca-73390e66eec0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:41:31 ha-438604 crio[685]: time="2024-04-08 11:41:31.031848255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbfccf68-9aef-44ad-aaca-73390e66eec0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11b291bd9a246       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   76f708d0734ed       busybox-7fdf7869d9-cdh5l
	f0cafcafceece       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   acf17bfe1f043       coredns-76f75df574-7gpzq
	0b72573fcec35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   9ee40d0739885       storage-provisioner
	63c0e178c3e78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   328834ce582ca       coredns-76f75df574-wqrvc
	557462b300c32       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   53cebe3e8c922       kindnet-82krw
	a0bffd365d14f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      6 minutes ago       Running             kube-proxy                0                   ffe693490c6c3       kube-proxy-v98zm
	b2d05e909b1dd       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   557d89a392b8e       kube-vip-ha-438604
	677d8d8c878cc       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Running             kube-apiserver            0                   29c075b954bee       kube-apiserver-ha-438604
	982252ef21b29       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            0                   95acb68b16e77       kube-scheduler-ha-438604
	532fccde459b9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   681a212174e36       etcd-ha-438604
	3f52ec6258fa2       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Running             kube-controller-manager   0                   e2170626fdc1c       kube-controller-manager-ha-438604
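The table above is CRI-level container state as reported by CRI-O on the primary control-plane node. As a minimal sketch (not part of the test run), a similar view can be pulled over minikube ssh; the profile name ha-438604 comes from this log, everything else is assumed:

    # List all CRI containers on the node (assumes the ha-438604 profile is still running)
    out/minikube-linux-amd64 -p ha-438604 ssh "sudo crictl ps -a"
    # Narrow to a single workload by container name
    out/minikube-linux-amd64 -p ha-438604 ssh "sudo crictl ps --name coredns"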
	
	
	==> coredns [63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938] <==
	[INFO] 10.244.1.2:35295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151396s
	[INFO] 10.244.1.2:60373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00071102s
	[INFO] 10.244.1.2:45844 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001995968s
	[INFO] 10.244.0.4:35463 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000243774s
	[INFO] 10.244.0.4:39312 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164775s
	[INFO] 10.244.2.2:45779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188488s
	[INFO] 10.244.2.2:55046 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001548285s
	[INFO] 10.244.2.2:39734 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001381546s
	[INFO] 10.244.2.2:60648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017788s
	[INFO] 10.244.1.2:50535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012891s
	[INFO] 10.244.1.2:34893 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001689023s
	[INFO] 10.244.1.2:54572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121059s
	[INFO] 10.244.0.4:55733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248755s
	[INFO] 10.244.0.4:44663 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046479s
	[INFO] 10.244.2.2:43313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161932s
	[INFO] 10.244.2.2:36056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115719s
	[INFO] 10.244.2.2:58531 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248815s
	[INFO] 10.244.1.2:40849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115353s
	[INFO] 10.244.1.2:51289 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105404s
	[INFO] 10.244.1.2:56814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070319s
	[INFO] 10.244.0.4:35492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160626s
	[INFO] 10.244.0.4:34374 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082632s
	[INFO] 10.244.2.2:43756 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109569s
	[INFO] 10.244.2.2:45152 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124387s
	[INFO] 10.244.1.2:38830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135636s
	
	
	==> coredns [f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d] <==
	[INFO] 10.244.0.4:59845 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002631659s
	[INFO] 10.244.0.4:58127 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184769s
	[INFO] 10.244.0.4:40273 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002164716s
	[INFO] 10.244.0.4:44675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011526s
	[INFO] 10.244.0.4:52644 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122913s
	[INFO] 10.244.2.2:49571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135207s
	[INFO] 10.244.2.2:54106 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016012s
	[INFO] 10.244.2.2:33817 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024307s
	[INFO] 10.244.2.2:53777 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096848s
	[INFO] 10.244.1.2:51257 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001970096s
	[INFO] 10.244.1.2:37927 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164729s
	[INFO] 10.244.1.2:46840 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074025s
	[INFO] 10.244.1.2:40034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116336s
	[INFO] 10.244.1.2:46524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110431s
	[INFO] 10.244.0.4:47504 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116612s
	[INFO] 10.244.0.4:52704 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105138s
	[INFO] 10.244.2.2:40699 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000199266s
	[INFO] 10.244.1.2:46666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009956s
	[INFO] 10.244.0.4:57492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119263s
	[INFO] 10.244.0.4:45362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139004s
	[INFO] 10.244.2.2:58706 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239864s
	[INFO] 10.244.2.2:32981 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128008s
	[INFO] 10.244.1.2:38182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167786s
	[INFO] 10.244.1.2:44324 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206004s
	[INFO] 10.244.1.2:37810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013702s
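Both CoreDNS replicas above are answering pod-network clients (10.244.x.x) without errors. A hedged way to pull the same logs directly, assuming the kubeconfig context is named after the ha-438604 profile and that CoreDNS carries the stock k8s-app=kube-dns label:

    # Logs for one replica (pod name taken from the container list above)
    kubectl --context ha-438604 -n kube-system logs coredns-76f75df574-7gpzq
    # Or both replicas at once, prefixed per pod
    kubectl --context ha-438604 -n kube-system logs -l k8s-app=kube-dns --prefix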
	
	
	==> describe nodes <==
	Name:               ha-438604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T11_34_48_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:34:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:41:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:35:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-438604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d242cef9ed484660b2c31aeed7e51ff5
	  System UUID:                d242cef9-ed48-4660-b2c3-1aeed7e51ff5
	  Boot ID:                    336ee057-2212-4601-ad25-56ebfd2bc06e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-cdh5l             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-76f75df574-7gpzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 coredns-76f75df574-wqrvc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 etcd-ha-438604                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m43s
	  kube-system                 kindnet-82krw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-apiserver-ha-438604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-ha-438604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-proxy-v98zm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-ha-438604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-vip-ha-438604                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m51s (x7 over 6m51s)  kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m51s (x8 over 6m51s)  kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m51s (x8 over 6m51s)  kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m43s                  kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s                  kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s                  kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m32s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal  NodeReady                6m28s                  kubelet          Node ha-438604 status is now: NodeReady
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	
	
	Name:               ha-438604-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_36_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:36:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:39:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    ha-438604-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 957d2c54c49c48d0b297f4467d1bac27
	  System UUID:                957d2c54-c49c-48d0-b297-f4467d1bac27
	  Boot ID:                    4a3bfa74-44c6-4743-beca-7f47225d1d75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jz4h9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-438604-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m7s
	  kube-system                 kindnet-b5ztk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-438604-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-ha-438604-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-5vc66                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-438604-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-438604-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  Starting                 5m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m8s (x2 over 5m8s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x2 over 5m8s)  kubelet          Node ha-438604-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x2 over 5m8s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeReady                4m57s                kubelet          Node ha-438604-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m50s                node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           3m39s                node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeNotReady             104s                 node-controller  Node ha-438604-m02 status is now: NodeNotReady
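The ha-438604-m02 entry above is the interesting one: its conditions flipped to Unknown at 11:39:47, the node-controller applied node.kubernetes.io/unreachable taints, and NodeNotReady was recorded 104s before this capture, which is consistent with the secondary control-plane node having been stopped. A quick cross-check of readiness and taints, sketched under the assumption that the kubeconfig context is named ha-438604:

    # Node readiness summary
    kubectl --context ha-438604 get nodes -o wide
    # Taints on the stopped secondary
    kubectl --context ha-438604 get node ha-438604-m02 -o jsonpath='{.spec.taints}{"\n"}'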
	
	
	Name:               ha-438604-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_37_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:41:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-438604-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27ce6086b0b04606902de8def056d57d
	  System UUID:                27ce6086-b0b0-4606-902d-e8def056d57d
	  Boot ID:                    e16126c1-ef05-4bfe-9505-165bab469df6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gk5bx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-438604-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m56s
	  kube-system                 kindnet-dg6vt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-apiserver-ha-438604-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ha-438604-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-pcbq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-ha-438604-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-438604-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-438604-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-438604-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-438604-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	
	
	Name:               ha-438604-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_38_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:38:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:41:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-438604-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0df153c018eb4bd3bce7e2132da5651e
	  System UUID:                0df153c0-18eb-4bd3-bce7-e2132da5651e
	  Boot ID:                    d3414940-ef8a-4f31-9dec-601fdd6541e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8rrcs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-2vmwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m54s (x3 over 2m54s)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x3 over 2m54s)  kubelet          Node ha-438604-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x3 over 2m54s)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-438604-m04 status is now: NodeReady
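The remaining nodes (ha-438604-m03 and ha-438604-m04) stayed Ready throughout this window. The whole block above can be regenerated against a live cluster, and readiness summarized per node, with the following sketch (the ha-438604 context name is assumed):

    # Regenerate the node descriptions captured above
    kubectl --context ha-438604 describe nodes
    # One Ready flag per node
    kubectl --context ha-438604 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'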
	
	
	==> dmesg <==
	[Apr 8 11:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054164] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.558260] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.775339] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.659050] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.215969] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.059868] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060056] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.165820] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.136183] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.312263] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.585051] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.065353] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.627952] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.807124] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.007059] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.588023] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[Apr 8 11:35] kauditd_printk_skb: 15 callbacks suppressed
	[Apr 8 11:36] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18] <==
	{"level":"warn","ts":"2024-04-08T11:41:31.064279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.151996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.174024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.292607Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.219:2380/version","remote-member-id":"7ff681eaaadd5fcd","error":"Get \"https://192.168.39.219:2380/version\": dial tcp 192.168.39.219:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-08T11:41:31.292672Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7ff681eaaadd5fcd","error":"Get \"https://192.168.39.219:2380/version\": dial tcp 192.168.39.219:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-08T11:41:31.339845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.350958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.352982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.36147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.366185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.369391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.380971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.388752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.397999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.401413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.405417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.415291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.42249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.430419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.434819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.438962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.445464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.452769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.454101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:41:31.462047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:41:31 up 7 min,  0 users,  load average: 0.44, 0.35, 0.18
	Linux ha-438604 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654] <==
	I0408 11:40:54.008057       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:41:04.024488       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:41:04.024696       1 main.go:227] handling current node
	I0408 11:41:04.024789       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:41:04.024850       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:41:04.025068       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:41:04.025131       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:41:04.025278       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:41:04.025331       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:41:14.049145       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:41:14.049346       1 main.go:227] handling current node
	I0408 11:41:14.049454       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:41:14.049482       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:41:14.049922       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:41:14.050011       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:41:14.050180       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:41:14.050268       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:41:24.062306       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:41:24.062413       1 main.go:227] handling current node
	I0408 11:41:24.062436       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:41:24.062453       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:41:24.062660       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:41:24.062699       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:41:24.062761       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:41:24.062779       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842] <==
	I0408 11:34:44.447203       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0408 11:34:44.447498       1 aggregator.go:165] initial CRD sync complete...
	I0408 11:34:44.447609       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 11:34:44.447672       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 11:34:44.447696       1 cache.go:39] Caches are synced for autoregister controller
	I0408 11:34:44.450089       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 11:34:44.486863       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 11:34:44.531256       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 11:34:45.337337       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 11:34:45.345590       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 11:34:45.345625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 11:34:46.059164       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 11:34:46.115853       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 11:34:46.343177       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0408 11:34:46.359981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.99]
	I0408 11:34:46.361304       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 11:34:46.366968       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 11:34:46.383411       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 11:34:48.217897       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 11:34:48.237197       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 11:34:48.247479       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0408 11:35:00.313205       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0408 11:35:00.386408       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E0408 11:38:04.022997       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.99:39356->192.168.39.219:10250: write: broken pipe
	W0408 11:39:26.277123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94 192.168.39.99]
	
	
	==> kube-controller-manager [3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef] <==
	I0408 11:37:58.811744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="359.036525ms"
	E0408 11:37:58.811797       1 replica_set.go:557] sync "default/busybox-7fdf7869d9" failed with Operation cannot be fulfilled on replicasets.apps "busybox-7fdf7869d9": the object has been modified; please apply your changes to the latest version and try again
	I0408 11:37:58.811945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="109.014µs"
	I0408 11:37:58.817952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="430.984µs"
	I0408 11:38:01.795183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="19.766662ms"
	I0408 11:38:01.795321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.204µs"
	I0408 11:38:01.879504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="41.687187ms"
	I0408 11:38:01.879896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="107.994µs"
	I0408 11:38:02.348230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.211236ms"
	I0408 11:38:02.348610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="255.762µs"
	E0408 11:38:37.684491       1 certificate_controller.go:146] Sync csr-mmx9w failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-mmx9w": the object has been modified; please apply your changes to the latest version and try again
	I0408 11:38:37.987772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-438604-m04\" does not exist"
	I0408 11:38:38.012469       1 range_allocator.go:380] "Set node PodCIDR" node="ha-438604-m04" podCIDRs=["10.244.3.0/24"]
	I0408 11:38:38.036232       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7zhbb"
	I0408 11:38:38.036468       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8rrcs"
	I0408 11:38:38.217696       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-bpf4z"
	I0408 11:38:38.222508       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-l987x"
	I0408 11:38:38.270564       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hv8lp"
	I0408 11:38:38.270615       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-7zhbb"
	I0408 11:38:39.583859       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-438604-m04"
	I0408 11:38:39.584018       1 event.go:376] "Event occurred" object="ha-438604-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller"
	I0408 11:38:48.493317       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-438604-m04"
	I0408 11:39:47.944280       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-438604-m04"
	I0408 11:39:48.036847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.577014ms"
	I0408 11:39:48.037153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="131.979µs"
	
	
	==> kube-proxy [a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119] <==
	I0408 11:35:02.658090       1 server_others.go:72] "Using iptables proxy"
	I0408 11:35:02.690993       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	I0408 11:35:02.734340       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 11:35:02.734424       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 11:35:02.734491       1 server_others.go:168] "Using iptables Proxier"
	I0408 11:35:02.738506       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 11:35:02.739305       1 server.go:865] "Version info" version="v1.29.3"
	I0408 11:35:02.739380       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:35:02.741210       1 config.go:188] "Starting service config controller"
	I0408 11:35:02.741459       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 11:35:02.741634       1 config.go:97] "Starting endpoint slice config controller"
	I0408 11:35:02.741670       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 11:35:02.742775       1 config.go:315] "Starting node config controller"
	I0408 11:35:02.742855       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 11:35:02.841944       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 11:35:02.842005       1 shared_informer.go:318] Caches are synced for service config
	I0408 11:35:02.843308       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a] <==
	W0408 11:34:45.455973       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 11:34:45.456066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 11:34:45.470402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 11:34:45.470455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 11:34:45.476434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 11:34:45.476484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:34:45.550192       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:34:45.550244       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 11:34:45.590975       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 11:34:45.591244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 11:34:45.635661       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 11:34:45.635714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0408 11:34:45.715450       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:34:45.715630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:34:45.718741       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:34:45.718796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0408 11:34:48.193426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0408 11:37:58.195651       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jz4h9\": pod busybox-7fdf7869d9-jz4h9 is already assigned to node \"ha-438604-m02\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-jz4h9" node="ha-438604-m02"
	E0408 11:37:58.197761       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 4a6f771f-16dd-4c0c-8d7d-c435b6e95b4f(default/busybox-7fdf7869d9-jz4h9) wasn't assumed so cannot be forgotten"
	E0408 11:37:58.197988       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jz4h9\": pod busybox-7fdf7869d9-jz4h9 is already assigned to node \"ha-438604-m02\"" pod="default/busybox-7fdf7869d9-jz4h9"
	I0408 11:37:58.198278       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-jz4h9" node="ha-438604-m02"
	E0408 11:37:58.253234       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-cdh5l\": pod busybox-7fdf7869d9-cdh5l is already assigned to node \"ha-438604\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-cdh5l" node="ha-438604"
	E0408 11:37:58.254055       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a83c06a6-d809-4c17-a406-3f1d4b9cfaf7(default/busybox-7fdf7869d9-cdh5l) wasn't assumed so cannot be forgotten"
	E0408 11:37:58.254322       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-cdh5l\": pod busybox-7fdf7869d9-cdh5l is already assigned to node \"ha-438604\"" pod="default/busybox-7fdf7869d9-cdh5l"
	I0408 11:37:58.254493       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-cdh5l" node="ha-438604"
	
	
	==> kubelet <==
	Apr 08 11:36:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:36:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:37:48 ha-438604 kubelet[1376]: E0408 11:37:48.492400    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:37:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:37:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:37:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:37:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:37:58 ha-438604 kubelet[1376]: I0408 11:37:58.216755    1376 topology_manager.go:215] "Topology Admit Handler" podUID="a83c06a6-d809-4c17-a406-3f1d4b9cfaf7" podNamespace="default" podName="busybox-7fdf7869d9-cdh5l"
	Apr 08 11:37:58 ha-438604 kubelet[1376]: I0408 11:37:58.260908    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdv2\" (UniqueName: \"kubernetes.io/projected/a83c06a6-d809-4c17-a406-3f1d4b9cfaf7-kube-api-access-ncdv2\") pod \"busybox-7fdf7869d9-cdh5l\" (UID: \"a83c06a6-d809-4c17-a406-3f1d4b9cfaf7\") " pod="default/busybox-7fdf7869d9-cdh5l"
	Apr 08 11:38:02 ha-438604 kubelet[1376]: I0408 11:38:02.309128    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-cdh5l" podStartSLOduration=1.679330254 podStartE2EDuration="4.309015456s" podCreationTimestamp="2024-04-08 11:37:58 +0000 UTC" firstStartedPulling="2024-04-08 11:37:58.925614531 +0000 UTC m=+190.737965944" lastFinishedPulling="2024-04-08 11:38:01.555299729 +0000 UTC m=+193.367651146" observedRunningTime="2024-04-08 11:38:02.307680619 +0000 UTC m=+194.120032032" watchObservedRunningTime="2024-04-08 11:38:02.309015456 +0000 UTC m=+194.121366902"
	Apr 08 11:38:48 ha-438604 kubelet[1376]: E0408 11:38:48.498609    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:38:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:38:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:38:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:38:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:39:48 ha-438604 kubelet[1376]: E0408 11:39:48.492700    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:39:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:39:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:39:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:39:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:40:48 ha-438604 kubelet[1376]: E0408 11:40:48.492951    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:40:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:40:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:40:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:40:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-438604 -n ha-438604
helpers_test.go:261: (dbg) Run:  kubectl --context ha-438604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.14s)
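The post-mortem above ends with the two diagnostic commands recorded at helpers_test.go:254 and helpers_test.go:261. To repeat that check by hand against the same run (a sketch only; it assumes the ha-438604 profile and kubeconfig context from this job still exist):

	# Report only the API-server field of the primary node's status
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-438604 -n ha-438604
	# List any pods, across all namespaces, that are not in the Running phase
	kubectl --context ha-438604 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

These are the same commands the harness invokes; they only read state and make no changes to the cluster.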

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (59.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (3.1846515s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:41:36.237631  390715 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:41:36.237918  390715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:36.237929  390715 out.go:304] Setting ErrFile to fd 2...
	I0408 11:41:36.237934  390715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:36.238138  390715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:41:36.238356  390715 out.go:298] Setting JSON to false
	I0408 11:41:36.238385  390715 mustload.go:65] Loading cluster: ha-438604
	I0408 11:41:36.238494  390715 notify.go:220] Checking for updates...
	I0408 11:41:36.238854  390715 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:41:36.238874  390715 status.go:255] checking status of ha-438604 ...
	I0408 11:41:36.239289  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:36.239365  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:36.260428  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0408 11:41:36.260992  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:36.261703  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:36.261734  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:36.262173  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:36.262441  390715 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:41:36.264083  390715 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:41:36.264107  390715 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:36.264413  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:36.264465  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:36.281698  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0408 11:41:36.282180  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:36.282700  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:36.282728  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:36.283101  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:36.283360  390715 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:41:36.285858  390715 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:36.286366  390715 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:36.286404  390715 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:36.286542  390715 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:36.286911  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:36.286966  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:36.306227  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
	I0408 11:41:36.306778  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:36.307390  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:36.307423  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:36.307808  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:36.308054  390715 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:41:36.308266  390715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:36.308313  390715 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:41:36.312020  390715 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:36.312684  390715 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:36.312712  390715 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:36.312892  390715 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:41:36.313205  390715 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:41:36.313383  390715 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:41:36.313553  390715 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:41:36.402381  390715 ssh_runner.go:195] Run: systemctl --version
	I0408 11:41:36.410188  390715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:36.429788  390715 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:36.429835  390715 api_server.go:166] Checking apiserver status ...
	I0408 11:41:36.429880  390715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:36.446341  390715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:41:36.458809  390715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:36.458900  390715 ssh_runner.go:195] Run: ls
	I0408 11:41:36.468882  390715 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:36.476593  390715 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:36.476630  390715 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:41:36.476642  390715 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:36.476666  390715 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:41:36.477140  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:36.477198  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:36.492884  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0408 11:41:36.493361  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:36.493840  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:36.493866  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:36.494272  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:36.494500  390715 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:41:36.496517  390715 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:41:36.496539  390715 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:36.496830  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:36.496870  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:36.513640  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0408 11:41:36.514097  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:36.514734  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:36.514764  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:36.515125  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:36.515455  390715 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:41:36.518603  390715 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:36.519126  390715 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:36.519158  390715 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:36.519362  390715 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:36.519703  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:36.519760  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:36.534918  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0408 11:41:36.535469  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:36.536045  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:36.536068  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:36.536387  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:36.536603  390715 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:41:36.536821  390715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:36.536847  390715 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:41:36.539532  390715 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:36.540097  390715 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:36.540135  390715 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:36.540276  390715 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:41:36.540585  390715 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:41:36.540781  390715 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:41:36.540919  390715 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	W0408 11:41:38.992073  390715 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:38.992186  390715 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0408 11:41:38.992201  390715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:38.992208  390715 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:41:38.992227  390715 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:38.992236  390715 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:41:38.992570  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:38.992621  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:39.007972  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0408 11:41:39.008413  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:39.008923  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:39.008948  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:39.009234  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:39.009428  390715 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:41:39.011100  390715 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:41:39.011121  390715 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:39.011544  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:39.011594  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:39.027527  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0408 11:41:39.028078  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:39.028708  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:39.028741  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:39.029051  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:39.029245  390715 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:41:39.031989  390715 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:39.032477  390715 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:39.032501  390715 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:39.032600  390715 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:39.032938  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:39.033000  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:39.048051  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
	I0408 11:41:39.048482  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:39.049042  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:39.049068  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:39.049408  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:39.049643  390715 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:41:39.049903  390715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:39.049931  390715 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:41:39.052940  390715 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:39.053390  390715 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:39.053413  390715 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:39.053576  390715 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:41:39.053751  390715 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:41:39.053882  390715 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:41:39.054005  390715 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:41:39.140999  390715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:39.159270  390715 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:39.159310  390715 api_server.go:166] Checking apiserver status ...
	I0408 11:41:39.159362  390715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:39.175916  390715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:41:39.186738  390715 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:39.186812  390715 ssh_runner.go:195] Run: ls
	I0408 11:41:39.192343  390715 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:39.197417  390715 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:39.197442  390715 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:41:39.197452  390715 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:39.197487  390715 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:41:39.197870  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:39.197910  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:39.213273  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0408 11:41:39.213807  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:39.214308  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:39.214331  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:39.214630  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:39.214837  390715 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:41:39.216689  390715 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:41:39.216710  390715 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:39.217040  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:39.217093  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:39.231903  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39297
	I0408 11:41:39.232356  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:39.232856  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:39.232883  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:39.233176  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:39.233379  390715 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:41:39.236227  390715 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:39.236651  390715 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:39.236686  390715 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:39.236850  390715 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:39.237207  390715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:39.237253  390715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:39.251960  390715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42297
	I0408 11:41:39.252357  390715 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:39.252860  390715 main.go:141] libmachine: Using API Version  1
	I0408 11:41:39.252883  390715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:39.253178  390715 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:39.253352  390715 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:41:39.253523  390715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:39.253544  390715 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:41:39.255933  390715 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:39.256433  390715 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:39.256455  390715 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:39.256602  390715 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:41:39.256759  390715 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:41:39.256913  390715 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:41:39.257022  390715 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:41:39.340657  390715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:39.358724  390715 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
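The stderr trace above shows why the status check exits with status 3: the SSH dial to ha-438604-m02 at 192.168.39.219:22 fails with "no route to host" (sshutil.go:64, status.go:376), so the m02 host is reported as Error and its kubelet/apiserver as Nonexistent. A quick manual connectivity check, purely a hypothetical sketch reusing the key path, username, and address recorded in this run, would be:

	# Expect a non-zero exit if the node is unreachable over SSH
	ssh -o ConnectTimeout=5 \
	  -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa \
	  docker@192.168.39.219 true; echo "ssh exit: $?"

The retry immediately below hits the same dial failure and returns the same exit status.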
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (5.103127148s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:41:40.787455  390813 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:41:40.787617  390813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:40.787628  390813 out.go:304] Setting ErrFile to fd 2...
	I0408 11:41:40.787633  390813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:40.787873  390813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:41:40.788080  390813 out.go:298] Setting JSON to false
	I0408 11:41:40.788110  390813 mustload.go:65] Loading cluster: ha-438604
	I0408 11:41:40.788162  390813 notify.go:220] Checking for updates...
	I0408 11:41:40.788639  390813 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:41:40.788664  390813 status.go:255] checking status of ha-438604 ...
	I0408 11:41:40.789197  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:40.789282  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:40.811747  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I0408 11:41:40.812478  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:40.813261  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:40.813307  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:40.813742  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:40.814006  390813 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:41:40.816210  390813 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:41:40.816229  390813 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:40.816555  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:40.816603  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:40.832454  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0408 11:41:40.832900  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:40.833465  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:40.833514  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:40.833904  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:40.834171  390813 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:41:40.836986  390813 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:40.837409  390813 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:40.837447  390813 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:40.837679  390813 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:40.837982  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:40.838038  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:40.854815  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0408 11:41:40.855309  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:40.855908  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:40.855945  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:40.856402  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:40.856700  390813 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:41:40.856985  390813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:40.857012  390813 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:41:40.860308  390813 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:40.860754  390813 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:40.860788  390813 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:40.860953  390813 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:41:40.861176  390813 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:41:40.861368  390813 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:41:40.861527  390813 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:41:40.948111  390813 ssh_runner.go:195] Run: systemctl --version
	I0408 11:41:40.955710  390813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:40.973500  390813 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:40.973546  390813 api_server.go:166] Checking apiserver status ...
	I0408 11:41:40.973596  390813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:40.990274  390813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:41:41.001896  390813 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:41.001971  390813 ssh_runner.go:195] Run: ls
	I0408 11:41:41.007531  390813 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:41.013547  390813 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:41.013581  390813 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:41:41.013594  390813 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:41.013619  390813 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:41:41.014056  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:41.014104  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:41.029630  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I0408 11:41:41.030066  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:41.030605  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:41.030633  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:41.030964  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:41.031163  390813 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:41:41.032982  390813 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:41:41.033008  390813 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:41.033315  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:41.033354  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:41.048768  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36149
	I0408 11:41:41.049197  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:41.049728  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:41.049763  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:41.050222  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:41.050450  390813 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:41:41.053363  390813 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:41.053802  390813 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:41.053832  390813 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:41.054002  390813 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:41.054327  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:41.054369  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:41.069321  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0408 11:41:41.069865  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:41.070407  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:41.070435  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:41.070820  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:41.071071  390813 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:41:41.071336  390813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:41.071368  390813 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:41:41.074341  390813 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:41.074867  390813 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:41.074900  390813 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:41.075084  390813 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:41:41.075288  390813 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:41:41.075469  390813 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:41:41.075637  390813 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	W0408 11:41:42.063978  390813 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:42.064034  390813 retry.go:31] will retry after 336.999785ms: dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:45.456065  390813 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:45.456179  390813 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0408 11:41:45.456208  390813 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:45.456220  390813 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:41:45.456270  390813 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:45.456287  390813 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:41:45.456648  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:45.456716  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:45.475033  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45115
	I0408 11:41:45.475514  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:45.476043  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:45.476078  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:45.476376  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:45.476647  390813 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:41:45.478328  390813 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:41:45.478352  390813 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:45.478682  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:45.478736  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:45.494573  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
	I0408 11:41:45.495039  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:45.495529  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:45.495554  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:45.495933  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:45.496202  390813 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:41:45.499068  390813 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:45.499540  390813 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:45.499567  390813 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:45.499730  390813 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:45.500256  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:45.500305  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:45.516066  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0408 11:41:45.516629  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:45.517154  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:45.517181  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:45.517501  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:45.517710  390813 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:41:45.517941  390813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:45.517965  390813 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:41:45.521078  390813 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:45.521508  390813 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:45.521542  390813 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:45.521670  390813 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:41:45.521883  390813 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:41:45.522055  390813 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:41:45.522215  390813 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:41:45.608219  390813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:45.624789  390813 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:45.624820  390813 api_server.go:166] Checking apiserver status ...
	I0408 11:41:45.624854  390813 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:45.640208  390813 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:41:45.652181  390813 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:45.652269  390813 ssh_runner.go:195] Run: ls
	I0408 11:41:45.658427  390813 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:45.665379  390813 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:45.665407  390813 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:41:45.665417  390813 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:45.665434  390813 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:41:45.665790  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:45.665834  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:45.681320  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0408 11:41:45.681741  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:45.682195  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:45.682216  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:45.682601  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:45.682782  390813 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:41:45.684564  390813 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:41:45.684580  390813 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:45.684865  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:45.684909  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:45.700600  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0408 11:41:45.701056  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:45.701559  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:45.701579  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:45.701907  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:45.702141  390813 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:41:45.704938  390813 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:45.705443  390813 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:45.705475  390813 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:45.705623  390813 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:45.705995  390813 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:45.706043  390813 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:45.721246  390813 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0408 11:41:45.721835  390813 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:45.722303  390813 main.go:141] libmachine: Using API Version  1
	I0408 11:41:45.722323  390813 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:45.722674  390813 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:45.722878  390813 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:41:45.723070  390813 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:45.723113  390813 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:41:45.725912  390813 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:45.726409  390813 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:45.726456  390813 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:45.726574  390813 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:41:45.726779  390813 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:41:45.726945  390813 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:41:45.727091  390813 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:41:45.812447  390813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:45.829303  390813 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
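The stderr block above shows the status probe dialing ha-438604-m02 over SSH, retrying briefly on "no route to host" (retry.go), and finally reporting the node as Host:Error. As a rough illustration of that dial-and-retry pattern only (a minimal sketch, not minikube's actual retry implementation; the address, attempt count, and backoff are assumptions), a self-contained Go version could look like this:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection a few times before giving up,
// mirroring the "dial failure (will retry)" lines in the log above.
// The attempt count and backoff here are illustrative assumptions,
// not minikube's real configuration.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 192.168.39.219:22 is the m02 SSH endpoint from the log above.
	if _, err := dialWithRetry("192.168.39.219:22", 3, 300*time.Millisecond); err != nil {
		// A node that never answers is what the probe reports as Host:Error.
		fmt.Println(err)
	}
}
```

Once the dial fails for good, the probe marks kubelet and apiserver as Nonexistent for that node, which is exactly the m02 entry in the status output above.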
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (5.244620477s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:41:46.794688  390914 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:41:46.794835  390914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:46.794847  390914 out.go:304] Setting ErrFile to fd 2...
	I0408 11:41:46.794854  390914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:46.795059  390914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:41:46.795321  390914 out.go:298] Setting JSON to false
	I0408 11:41:46.795352  390914 mustload.go:65] Loading cluster: ha-438604
	I0408 11:41:46.795480  390914 notify.go:220] Checking for updates...
	I0408 11:41:46.795831  390914 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:41:46.795851  390914 status.go:255] checking status of ha-438604 ...
	I0408 11:41:46.796281  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:46.796352  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:46.817582  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I0408 11:41:46.818055  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:46.818820  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:46.818856  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:46.819615  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:46.820055  390914 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:41:46.823148  390914 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:41:46.823177  390914 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:46.823681  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:46.823820  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:46.839571  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0408 11:41:46.840021  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:46.840524  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:46.840557  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:46.840896  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:46.841089  390914 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:41:46.843935  390914 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:46.844311  390914 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:46.844343  390914 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:46.844426  390914 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:46.844850  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:46.844907  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:46.860605  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
	I0408 11:41:46.861113  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:46.861648  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:46.861673  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:46.862000  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:46.862207  390914 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:41:46.862415  390914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:46.862446  390914 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:41:46.865195  390914 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:46.865592  390914 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:46.865620  390914 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:46.865755  390914 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:41:46.865954  390914 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:41:46.866142  390914 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:41:46.866332  390914 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:41:46.956152  390914 ssh_runner.go:195] Run: systemctl --version
	I0408 11:41:46.962905  390914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:46.980348  390914 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:46.980389  390914 api_server.go:166] Checking apiserver status ...
	I0408 11:41:46.980425  390914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:46.995999  390914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:41:47.006547  390914 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:47.006619  390914 ssh_runner.go:195] Run: ls
	I0408 11:41:47.011864  390914 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:47.016429  390914 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:47.016456  390914 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:41:47.016467  390914 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:47.016494  390914 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:41:47.016802  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:47.016836  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:47.032272  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36353
	I0408 11:41:47.033267  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:47.033914  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:47.033940  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:47.034469  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:47.035193  390914 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:41:47.036953  390914 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:41:47.036972  390914 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:47.037307  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:47.037347  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:47.053498  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0408 11:41:47.053920  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:47.054368  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:47.054394  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:47.054793  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:47.055051  390914 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:41:47.057785  390914 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:47.058269  390914 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:47.058305  390914 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:47.058406  390914 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:47.058731  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:47.058776  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:47.074351  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0408 11:41:47.074834  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:47.075461  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:47.075497  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:47.075918  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:47.076150  390914 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:41:47.076369  390914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:47.076392  390914 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:41:47.079144  390914 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:47.079620  390914 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:47.079648  390914 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:47.079761  390914 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:41:47.079948  390914 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:41:47.080111  390914 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:41:47.080257  390914 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	W0408 11:41:48.528110  390914 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:48.528168  390914 retry.go:31] will retry after 142.230186ms: dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:51.600016  390914 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:51.600130  390914 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0408 11:41:51.600152  390914 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:51.600161  390914 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:41:51.600206  390914 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:51.600221  390914 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:41:51.600619  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:51.600670  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:51.616023  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0408 11:41:51.616539  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:51.617089  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:51.617118  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:51.617497  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:51.617683  390914 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:41:51.619525  390914 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:41:51.619545  390914 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:51.619897  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:51.619960  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:51.637293  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0408 11:41:51.637861  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:51.638359  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:51.638386  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:51.638735  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:51.638947  390914 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:41:51.642022  390914 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:51.642493  390914 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:51.642541  390914 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:51.642693  390914 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:51.643002  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:51.643040  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:51.659387  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
	I0408 11:41:51.659928  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:51.660390  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:51.660415  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:51.660837  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:51.661004  390914 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:41:51.661234  390914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:51.661255  390914 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:41:51.664269  390914 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:51.664729  390914 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:51.664776  390914 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:51.664922  390914 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:41:51.665144  390914 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:41:51.665309  390914 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:41:51.665496  390914 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:41:51.754696  390914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:51.771010  390914 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:51.771052  390914 api_server.go:166] Checking apiserver status ...
	I0408 11:41:51.771089  390914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:51.788388  390914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:41:51.800665  390914 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:51.800718  390914 ssh_runner.go:195] Run: ls
	I0408 11:41:51.806210  390914 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:51.812509  390914 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:51.812555  390914 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:41:51.812570  390914 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:51.812598  390914 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:41:51.812930  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:51.812970  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:51.828652  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 11:41:51.829077  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:51.829575  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:51.829599  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:51.829944  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:51.830156  390914 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:41:51.831729  390914 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:41:51.831749  390914 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:51.832141  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:51.832186  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:51.847608  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0408 11:41:51.848191  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:51.848741  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:51.848775  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:51.849167  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:51.849344  390914 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:41:51.852990  390914 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:51.853450  390914 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:51.853543  390914 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:51.853763  390914 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:51.854211  390914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:51.854270  390914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:51.870143  390914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40831
	I0408 11:41:51.870646  390914 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:51.871279  390914 main.go:141] libmachine: Using API Version  1
	I0408 11:41:51.871301  390914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:51.871711  390914 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:51.871921  390914 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:41:51.872160  390914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:51.872186  390914 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:41:51.875438  390914 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:51.876011  390914 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:51.876043  390914 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:51.876240  390914 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:41:51.876448  390914 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:41:51.876618  390914 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:41:51.876753  390914 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:41:51.960505  390914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:51.976980  390914 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
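For the reachable control-plane nodes, each check above ends with a GET against https://192.168.39.254:8443/healthz and treats a 200 response with body "ok" as "apiserver status = Running". A minimal sketch of such a probe follows (the endpoint is taken from the log; skipping TLS verification is an assumption made only to keep the sketch self-contained, not how minikube's client authenticates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is an illustrative shortcut; the real status
	// probe uses the cluster's kubeconfig credentials.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 + "ok" is what the log above records as a healthy apiserver.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
```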
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (4.494012901s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:41:53.812169  391030 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:41:53.812444  391030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:53.812454  391030 out.go:304] Setting ErrFile to fd 2...
	I0408 11:41:53.812458  391030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:41:53.812667  391030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:41:53.812874  391030 out.go:298] Setting JSON to false
	I0408 11:41:53.812910  391030 mustload.go:65] Loading cluster: ha-438604
	I0408 11:41:53.813025  391030 notify.go:220] Checking for updates...
	I0408 11:41:53.813328  391030 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:41:53.813347  391030 status.go:255] checking status of ha-438604 ...
	I0408 11:41:53.813749  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:53.813815  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:53.834633  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37939
	I0408 11:41:53.835189  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:53.835753  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:53.835778  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:53.836185  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:53.836408  391030 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:41:53.838135  391030 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:41:53.838152  391030 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:53.838440  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:53.838487  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:53.854002  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
	I0408 11:41:53.854500  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:53.855089  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:53.855116  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:53.855560  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:53.855820  391030 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:41:53.859290  391030 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:53.859759  391030 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:53.859798  391030 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:53.859967  391030 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:41:53.860307  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:53.860360  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:53.876185  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0408 11:41:53.876607  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:53.877117  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:53.877139  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:53.877692  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:53.877939  391030 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:41:53.878185  391030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:53.878228  391030 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:41:53.881179  391030 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:53.881570  391030 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:41:53.881602  391030 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:41:53.881772  391030 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:41:53.881966  391030 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:41:53.882128  391030 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:41:53.882267  391030 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:41:53.974121  391030 ssh_runner.go:195] Run: systemctl --version
	I0408 11:41:53.982492  391030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:54.002932  391030 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:54.002984  391030 api_server.go:166] Checking apiserver status ...
	I0408 11:41:54.003049  391030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:54.025087  391030 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:41:54.042948  391030 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:54.043017  391030 ssh_runner.go:195] Run: ls
	I0408 11:41:54.047836  391030 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:54.052167  391030 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:54.052195  391030 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:41:54.052206  391030 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:54.052223  391030 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:41:54.052528  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:54.052577  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:54.068271  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0408 11:41:54.068785  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:54.069382  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:54.069414  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:54.069798  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:54.070014  391030 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:41:54.071643  391030 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:41:54.071662  391030 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:54.071977  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:54.072016  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:54.087360  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0408 11:41:54.087863  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:54.088436  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:54.088466  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:54.088814  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:54.089055  391030 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:41:54.092273  391030 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:54.092729  391030 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:54.092750  391030 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:54.092924  391030 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:41:54.093390  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:54.093440  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:54.109536  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I0408 11:41:54.110059  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:54.110563  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:54.110591  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:54.110936  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:54.111176  391030 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:41:54.111405  391030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:54.111424  391030 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:41:54.114340  391030 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:54.114831  391030 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:41:54.114859  391030 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:41:54.114989  391030 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:41:54.115166  391030 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:41:54.115346  391030 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:41:54.115469  391030 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	W0408 11:41:54.675914  391030 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:54.675967  391030 retry.go:31] will retry after 134.486841ms: dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:57.872000  391030 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:41:57.872137  391030 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0408 11:41:57.872163  391030 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:57.872178  391030 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:41:57.872208  391030 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:41:57.872221  391030 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:41:57.872585  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:57.872647  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:57.888104  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0408 11:41:57.888656  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:57.889263  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:57.889297  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:57.889667  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:57.889904  391030 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:41:57.891619  391030 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:41:57.891640  391030 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:57.891980  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:57.892026  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:57.908642  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I0408 11:41:57.909087  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:57.909601  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:57.909625  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:57.909967  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:57.910183  391030 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:41:57.913253  391030 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:57.913770  391030 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:57.913807  391030 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:57.913938  391030 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:41:57.914252  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:57.914292  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:57.930262  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0408 11:41:57.930791  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:57.931289  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:57.931314  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:57.931624  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:57.931857  391030 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:41:57.932064  391030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:57.932091  391030 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:41:57.934997  391030 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:57.935452  391030 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:41:57.935479  391030 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:41:57.935664  391030 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:41:57.935871  391030 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:41:57.936030  391030 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:41:57.936144  391030 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:41:58.025287  391030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:58.042594  391030 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:41:58.042625  391030 api_server.go:166] Checking apiserver status ...
	I0408 11:41:58.042659  391030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:41:58.061177  391030 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:41:58.073275  391030 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:41:58.073347  391030 ssh_runner.go:195] Run: ls
	I0408 11:41:58.078699  391030 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:41:58.085642  391030 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:41:58.085673  391030 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:41:58.085684  391030 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:41:58.085706  391030 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:41:58.086082  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:58.086150  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:58.101753  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0408 11:41:58.102265  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:58.102764  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:58.102787  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:58.103127  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:58.103325  391030 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:41:58.105000  391030 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:41:58.105021  391030 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:58.105625  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:58.105682  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:58.122341  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0408 11:41:58.122786  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:58.123272  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:58.123293  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:58.123633  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:58.123878  391030 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:41:58.126678  391030 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:58.127086  391030 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:58.127108  391030 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:58.127235  391030 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:41:58.127543  391030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:41:58.127580  391030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:41:58.142988  391030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
	I0408 11:41:58.143412  391030 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:41:58.143959  391030 main.go:141] libmachine: Using API Version  1
	I0408 11:41:58.143988  391030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:41:58.144369  391030 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:41:58.144564  391030 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:41:58.144792  391030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:41:58.144812  391030 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:41:58.147609  391030 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:58.147996  391030 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:41:58.148028  391030 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:41:58.148185  391030 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:41:58.148366  391030 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:41:58.148526  391030 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:41:58.148684  391030 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:41:58.231351  391030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:41:58.246058  391030 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
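The per-node status probe captured in the stderr above reduces to a few SSH-driven checks: disk usage of /var, whether the kubelet unit is active, a pgrep for kube-apiserver plus a freezer-cgroup lookup, and finally a GET against the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. The sketch below is a minimal, illustrative version of that last probe only; it is not minikube's implementation, and it skips TLS verification for brevity where the real client would present the cluster's CA and client certificates.

```go
// Minimal sketch (assumption: TLS verification skipped) of the apiserver
// health probe logged above: GET /healthz and treat a 200 "ok" as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}
```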
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (3.783370944s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:42:02.972493  391131 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:42:02.972648  391131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:02.972658  391131 out.go:304] Setting ErrFile to fd 2...
	I0408 11:42:02.972663  391131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:02.972872  391131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:42:02.973093  391131 out.go:298] Setting JSON to false
	I0408 11:42:02.973128  391131 mustload.go:65] Loading cluster: ha-438604
	I0408 11:42:02.973174  391131 notify.go:220] Checking for updates...
	I0408 11:42:02.973599  391131 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:42:02.973621  391131 status.go:255] checking status of ha-438604 ...
	I0408 11:42:02.974061  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:02.974151  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:02.996324  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0408 11:42:02.996940  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:02.997653  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:02.997680  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:02.998064  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:02.998260  391131 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:42:03.000007  391131 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:42:03.000025  391131 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:03.000290  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:03.000325  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:03.016612  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0408 11:42:03.017122  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:03.017834  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:03.017902  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:03.018362  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:03.018642  391131 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:42:03.021671  391131 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:03.022066  391131 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:03.022100  391131 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:03.022216  391131 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:03.022564  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:03.022613  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:03.040314  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0408 11:42:03.040773  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:03.041308  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:03.041343  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:03.041681  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:03.041902  391131 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:42:03.042160  391131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:03.042197  391131 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:42:03.045357  391131 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:03.045776  391131 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:03.045819  391131 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:03.045988  391131 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:42:03.046209  391131 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:42:03.046358  391131 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:42:03.046566  391131 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:42:03.132314  391131 ssh_runner.go:195] Run: systemctl --version
	I0408 11:42:03.138472  391131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:03.153233  391131 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:03.153271  391131 api_server.go:166] Checking apiserver status ...
	I0408 11:42:03.153306  391131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:03.168287  391131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:42:03.179902  391131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:03.179963  391131 ssh_runner.go:195] Run: ls
	I0408 11:42:03.184925  391131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:03.189438  391131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:03.189461  391131 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:42:03.189473  391131 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:03.189490  391131 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:42:03.189778  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:03.189817  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:03.205450  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0408 11:42:03.206274  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:03.207647  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:03.207675  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:03.208065  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:03.208288  391131 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:42:03.209799  391131 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:42:03.209817  391131 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:42:03.210190  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:03.210276  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:03.225399  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0408 11:42:03.225858  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:03.226414  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:03.226444  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:03.226783  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:03.227012  391131 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:42:03.229643  391131 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:42:03.230086  391131 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:42:03.230119  391131 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:42:03.230240  391131 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:42:03.230704  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:03.230753  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:03.245869  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0408 11:42:03.246425  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:03.246886  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:03.246909  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:03.247276  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:03.247498  391131 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:42:03.247717  391131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:03.247742  391131 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:42:03.250385  391131 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:42:03.250883  391131 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:42:03.250911  391131 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:42:03.251037  391131 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:42:03.251235  391131 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:42:03.251401  391131 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:42:03.251546  391131 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	W0408 11:42:06.324013  391131 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.219:22: connect: no route to host
	W0408 11:42:06.324132  391131 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0408 11:42:06.324156  391131 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:42:06.324164  391131 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:42:06.324192  391131 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	I0408 11:42:06.324206  391131 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:42:06.324794  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:06.324864  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:06.340280  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0408 11:42:06.340735  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:06.341317  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:06.341369  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:06.341816  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:06.342048  391131 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:42:06.343728  391131 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:42:06.343748  391131 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:06.344043  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:06.344096  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:06.360458  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I0408 11:42:06.360884  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:06.361426  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:06.361454  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:06.361842  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:06.362080  391131 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:42:06.364831  391131 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:06.365427  391131 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:06.365461  391131 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:06.365624  391131 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:06.365963  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:06.366006  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:06.381524  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33763
	I0408 11:42:06.381997  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:06.382513  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:06.382536  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:06.382898  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:06.383163  391131 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:42:06.383370  391131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:06.383395  391131 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:42:06.386541  391131 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:06.387047  391131 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:06.387079  391131 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:06.387269  391131 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:42:06.387457  391131 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:42:06.387614  391131 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:42:06.387799  391131 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:42:06.472526  391131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:06.489770  391131 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:06.489802  391131 api_server.go:166] Checking apiserver status ...
	I0408 11:42:06.489835  391131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:06.506392  391131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:42:06.518918  391131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:06.518993  391131 ssh_runner.go:195] Run: ls
	I0408 11:42:06.524995  391131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:06.529952  391131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:06.529992  391131 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:42:06.530007  391131 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:06.530030  391131 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:42:06.530354  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:06.530399  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:06.546027  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I0408 11:42:06.546559  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:06.547162  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:06.547184  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:06.547513  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:06.547760  391131 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:42:06.549427  391131 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:42:06.549446  391131 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:06.549732  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:06.549767  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:06.565275  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I0408 11:42:06.565757  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:06.566227  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:06.566254  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:06.566574  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:06.566775  391131 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:42:06.569616  391131 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:06.570131  391131 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:06.570169  391131 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:06.570334  391131 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:06.570652  391131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:06.570699  391131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:06.586661  391131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38215
	I0408 11:42:06.587076  391131 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:06.587642  391131 main.go:141] libmachine: Using API Version  1
	I0408 11:42:06.587664  391131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:06.588023  391131 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:06.588229  391131 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:42:06.588450  391131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:06.588479  391131 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:42:06.591599  391131 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:06.592045  391131 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:06.592089  391131 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:06.592225  391131 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:42:06.592417  391131 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:42:06.592561  391131 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:42:06.592757  391131 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:42:06.679655  391131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:06.694689  391131 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
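In the run above, ha-438604-m02 is reported as host: Error / kubelet: Nonexistent because the SSH dial to 192.168.39.219:22 fails with "no route to host", and the command exits with status 3. A hedged sketch of that reachability check is below; the timeout and hard-coded address are illustrative assumptions, not values taken from the status code itself.

```go
// Minimal sketch of the SSH reachability check that fails above for m02:
// a plain TCP dial with a timeout; "no route to host" surfaces as a dial error.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second) // assumed timeout
	if err != nil {
		return err // e.g. "dial tcp 192.168.39.219:22: connect: no route to host"
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.219:22"); err != nil {
		fmt.Println("host unreachable:", err)
	}
}
```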
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 7 (1.001928904s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:42:12.153352  391247 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:42:12.153922  391247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:12.153967  391247 out.go:304] Setting ErrFile to fd 2...
	I0408 11:42:12.153984  391247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:12.154433  391247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:42:12.155109  391247 out.go:298] Setting JSON to false
	I0408 11:42:12.155146  391247 mustload.go:65] Loading cluster: ha-438604
	I0408 11:42:12.155276  391247 notify.go:220] Checking for updates...
	I0408 11:42:12.156049  391247 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:42:12.156074  391247 status.go:255] checking status of ha-438604 ...
	I0408 11:42:12.156477  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.156562  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.176656  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44629
	I0408 11:42:12.177287  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.178041  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.178089  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.178459  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.178680  391247 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:42:12.180683  391247 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:42:12.180708  391247 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:12.180992  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.181037  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.199182  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0408 11:42:12.199845  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.200488  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.200512  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.200954  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.201157  391247 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:42:12.204375  391247 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:12.204790  391247 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:12.204822  391247 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:12.205085  391247 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:12.205363  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.205401  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.220451  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34491
	I0408 11:42:12.220897  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.221433  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.221455  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.221797  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.222028  391247 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:42:12.222245  391247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:12.222272  391247 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:42:12.225276  391247 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:12.225766  391247 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:12.225799  391247 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:12.225955  391247 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:42:12.226148  391247 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:42:12.226317  391247 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:42:12.226459  391247 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:42:12.320460  391247 ssh_runner.go:195] Run: systemctl --version
	I0408 11:42:12.327174  391247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:12.342776  391247 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:12.342822  391247 api_server.go:166] Checking apiserver status ...
	I0408 11:42:12.342884  391247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:12.375173  391247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:42:12.390247  391247 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:12.390315  391247 ssh_runner.go:195] Run: ls
	I0408 11:42:12.396402  391247 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:12.401079  391247 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:12.401108  391247 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:42:12.401119  391247 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:12.401141  391247 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:42:12.401575  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.401627  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.418615  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0408 11:42:12.419139  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.419733  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.419768  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.420158  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.420395  391247 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:42:12.718669  391247 status.go:330] ha-438604-m02 host status = "Stopped" (err=<nil>)
	I0408 11:42:12.718699  391247 status.go:343] host is not running, skipping remaining checks
	I0408 11:42:12.718706  391247 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:12.718727  391247 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:42:12.719032  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.719081  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.734768  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0408 11:42:12.735269  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.735827  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.735861  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.736303  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.736551  391247 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:42:12.738579  391247 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:42:12.738606  391247 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:12.738943  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.738993  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.754646  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46245
	I0408 11:42:12.755129  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.755716  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.755750  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.756136  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.756430  391247 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:42:12.759585  391247 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:12.760072  391247 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:12.760115  391247 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:12.760219  391247 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:12.760611  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.760662  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.776626  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0408 11:42:12.777111  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.777660  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.777690  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.778073  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.778331  391247 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:42:12.778542  391247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:12.778569  391247 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:42:12.781368  391247 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:12.781802  391247 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:12.781847  391247 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:12.781981  391247 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:42:12.782209  391247 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:42:12.782387  391247 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:42:12.782538  391247 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:42:12.868662  391247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:12.887533  391247 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:12.887570  391247 api_server.go:166] Checking apiserver status ...
	I0408 11:42:12.887609  391247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:12.902913  391247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:42:12.914910  391247 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:12.914993  391247 ssh_runner.go:195] Run: ls
	I0408 11:42:12.919925  391247 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:12.926822  391247 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:12.926851  391247 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:42:12.926873  391247 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:12.926889  391247 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:42:12.927181  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.927221  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.943517  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I0408 11:42:12.944091  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.944646  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.944669  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.945063  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.945279  391247 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:42:12.947084  391247 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:42:12.947102  391247 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:12.947418  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.947462  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.964063  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0408 11:42:12.964563  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.965073  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.965105  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.965465  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.965654  391247 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:42:12.969096  391247 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:12.969559  391247 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:12.969587  391247 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:12.969746  391247 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:12.970061  391247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:12.970122  391247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:12.986761  391247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I0408 11:42:12.987300  391247 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:12.987848  391247 main.go:141] libmachine: Using API Version  1
	I0408 11:42:12.987876  391247 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:12.988230  391247 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:12.988438  391247 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:42:12.988607  391247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:12.988626  391247 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:42:12.991473  391247 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:12.992015  391247 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:12.992049  391247 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:12.992201  391247 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:42:12.992418  391247 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:42:12.992628  391247 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:42:12.992811  391247 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:42:13.075917  391247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:13.093012  391247 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
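The repeated "unable to find freezer cgroup ... Process exited with status 1" warnings above come from grepping /proc/<pid>/cgroup for a cgroup v1 freezer controller line; on a cgroup v2 host no such line exists, so the egrep matches nothing and exits 1, and the probe falls through to the healthz check. The sketch below illustrates that lookup under the assumption of a readable /proc; the pid is illustrative.

```go
// Minimal sketch of the freezer-cgroup lookup behind the warning above:
// scan /proc/<pid>/cgroup for a v1 "freezer" controller entry. On cgroup v2
// the file only has "0::/..." lines, so the lookup comes up empty.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func freezerCgroup(pid int) (string, bool) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", false
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// cgroup v1 lines look like "7:freezer:/kubepods/...".
		fields := strings.SplitN(scanner.Text(), ":", 3)
		if len(fields) == 3 && strings.Contains(fields[1], "freezer") {
			return fields[2], true
		}
	}
	return "", false
}

func main() {
	path, ok := freezerCgroup(os.Getpid())
	fmt.Println(path, ok)
}
```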
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 7 (669.511386ms)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:42:24.136984  391390 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:42:24.137217  391390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:24.137226  391390 out.go:304] Setting ErrFile to fd 2...
	I0408 11:42:24.137230  391390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:24.137442  391390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:42:24.137674  391390 out.go:298] Setting JSON to false
	I0408 11:42:24.137710  391390 mustload.go:65] Loading cluster: ha-438604
	I0408 11:42:24.137832  391390 notify.go:220] Checking for updates...
	I0408 11:42:24.138126  391390 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:42:24.138145  391390 status.go:255] checking status of ha-438604 ...
	I0408 11:42:24.138568  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.138625  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.159321  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40135
	I0408 11:42:24.159892  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.160577  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.160599  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.161042  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.161305  391390 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:42:24.163065  391390 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:42:24.163087  391390 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:24.163384  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.163431  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.179415  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0408 11:42:24.179920  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.180448  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.180475  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.180808  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.181023  391390 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:42:24.183848  391390 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:24.184225  391390 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:24.184260  391390 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:24.184412  391390 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:24.184733  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.184784  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.200282  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39617
	I0408 11:42:24.200753  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.201269  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.201293  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.201664  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.201863  391390 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:42:24.202114  391390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:24.202156  391390 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:42:24.205079  391390 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:24.205536  391390 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:24.205567  391390 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:24.205752  391390 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:42:24.205984  391390 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:42:24.206185  391390 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:42:24.206391  391390 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:42:24.292887  391390 ssh_runner.go:195] Run: systemctl --version
	I0408 11:42:24.299585  391390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:24.315724  391390 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:24.315767  391390 api_server.go:166] Checking apiserver status ...
	I0408 11:42:24.315814  391390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:24.332110  391390 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:42:24.344289  391390 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:24.344356  391390 ssh_runner.go:195] Run: ls
	I0408 11:42:24.349135  391390 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:24.353600  391390 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:24.353634  391390 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:42:24.353649  391390 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:24.353671  391390 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:42:24.354105  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.354150  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.371043  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33557
	I0408 11:42:24.371466  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.372048  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.372071  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.372462  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.372661  391390 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:42:24.374384  391390 status.go:330] ha-438604-m02 host status = "Stopped" (err=<nil>)
	I0408 11:42:24.374401  391390 status.go:343] host is not running, skipping remaining checks
	I0408 11:42:24.374410  391390 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:24.374432  391390 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:42:24.374834  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.374889  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.389867  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44169
	I0408 11:42:24.390383  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.390889  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.390915  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.391248  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.391433  391390 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:42:24.392906  391390 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:42:24.392925  391390 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:24.393216  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.393255  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.408131  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I0408 11:42:24.408598  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.409066  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.409109  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.409474  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.409693  391390 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:42:24.412360  391390 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:24.412758  391390 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:24.412783  391390 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:24.412996  391390 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:24.413419  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.413474  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.429330  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45667
	I0408 11:42:24.429867  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.430422  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.430453  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.430775  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.431004  391390 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:42:24.431216  391390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:24.431238  391390 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:42:24.433815  391390 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:24.434201  391390 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:24.434225  391390 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:24.434356  391390 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:42:24.434512  391390 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:42:24.434656  391390 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:42:24.434795  391390 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:42:24.521633  391390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:24.540810  391390 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:24.540841  391390 api_server.go:166] Checking apiserver status ...
	I0408 11:42:24.540892  391390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:24.556217  391390 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:42:24.567244  391390 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:24.567307  391390 ssh_runner.go:195] Run: ls
	I0408 11:42:24.572154  391390 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:24.576578  391390 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:24.576611  391390 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:42:24.576623  391390 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:24.576646  391390 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:42:24.577026  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.577083  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.592456  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I0408 11:42:24.592969  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.593479  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.593501  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.593912  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.594100  391390 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:42:24.595759  391390 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:42:24.595779  391390 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:24.596065  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.596098  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.612156  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41351
	I0408 11:42:24.612625  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.613133  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.613158  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.613512  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.613716  391390 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:42:24.616278  391390 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:24.616868  391390 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:24.616894  391390 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:24.617092  391390 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:24.617420  391390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:24.617487  391390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:24.633731  391390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0408 11:42:24.634217  391390 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:24.634778  391390 main.go:141] libmachine: Using API Version  1
	I0408 11:42:24.634803  391390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:24.635139  391390 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:24.635382  391390 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:42:24.635575  391390 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:24.635600  391390 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:42:24.638813  391390 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:24.639253  391390 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:24.639295  391390 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:24.639470  391390 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:42:24.639715  391390 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:42:24.639924  391390 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:42:24.640068  391390 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:42:24.728307  391390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:24.744451  391390 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 7 (681.355867ms)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-438604-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:42:32.438917  391478 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:42:32.439035  391478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:32.439052  391478 out.go:304] Setting ErrFile to fd 2...
	I0408 11:42:32.439056  391478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:32.439259  391478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:42:32.439442  391478 out.go:298] Setting JSON to false
	I0408 11:42:32.439474  391478 mustload.go:65] Loading cluster: ha-438604
	I0408 11:42:32.439595  391478 notify.go:220] Checking for updates...
	I0408 11:42:32.439888  391478 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:42:32.439905  391478 status.go:255] checking status of ha-438604 ...
	I0408 11:42:32.440306  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.440372  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.457976  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
	I0408 11:42:32.458528  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.459291  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.459324  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.459758  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.460010  391478 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:42:32.461797  391478 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:42:32.461819  391478 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:32.462211  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.462284  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.478912  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0408 11:42:32.479466  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.480059  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.480110  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.480482  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.480682  391478 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:42:32.483612  391478 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:32.484128  391478 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:32.484157  391478 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:32.484425  391478 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:42:32.484764  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.484819  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.500999  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0408 11:42:32.501556  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.502103  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.502137  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.502494  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.502719  391478 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:42:32.502932  391478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:32.502974  391478 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:42:32.505990  391478 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:32.506455  391478 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:42:32.506497  391478 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:42:32.506626  391478 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:42:32.506825  391478 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:42:32.507016  391478 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:42:32.507198  391478 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:42:32.596289  391478 ssh_runner.go:195] Run: systemctl --version
	I0408 11:42:32.604057  391478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:32.621997  391478 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:32.622038  391478 api_server.go:166] Checking apiserver status ...
	I0408 11:42:32.622074  391478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:32.640219  391478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup
	W0408 11:42:32.654825  391478 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1203/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:32.654896  391478 ssh_runner.go:195] Run: ls
	I0408 11:42:32.661379  391478 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:32.667916  391478 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:32.667951  391478 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:42:32.667964  391478 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:32.667993  391478 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:42:32.668338  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.668407  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.684122  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0408 11:42:32.684637  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.685192  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.685218  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.685555  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.685826  391478 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:42:32.687495  391478 status.go:330] ha-438604-m02 host status = "Stopped" (err=<nil>)
	I0408 11:42:32.687513  391478 status.go:343] host is not running, skipping remaining checks
	I0408 11:42:32.687522  391478 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:32.687554  391478 status.go:255] checking status of ha-438604-m03 ...
	I0408 11:42:32.687878  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.687921  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.704808  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36369
	I0408 11:42:32.705370  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.706026  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.706059  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.706533  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.707029  391478 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:42:32.708946  391478 status.go:330] ha-438604-m03 host status = "Running" (err=<nil>)
	I0408 11:42:32.708968  391478 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:32.709834  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.709882  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.726542  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0408 11:42:32.727093  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.727703  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.727736  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.728063  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.728269  391478 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:42:32.730815  391478 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:32.731212  391478 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:32.731244  391478 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:32.731430  391478 host.go:66] Checking if "ha-438604-m03" exists ...
	I0408 11:42:32.731781  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.731831  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.747265  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I0408 11:42:32.747733  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.748264  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.748294  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.748707  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.748933  391478 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:42:32.749137  391478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:32.749162  391478 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:42:32.752027  391478 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:32.752505  391478 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:32.752539  391478 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:32.752650  391478 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:42:32.752844  391478 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:42:32.753065  391478 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:42:32.753219  391478 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:42:32.838231  391478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:32.855613  391478 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:42:32.855648  391478 api_server.go:166] Checking apiserver status ...
	I0408 11:42:32.855708  391478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:42:32.871590  391478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup
	W0408 11:42:32.882936  391478 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1499/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:42:32.882999  391478 ssh_runner.go:195] Run: ls
	I0408 11:42:32.888064  391478 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:42:32.892814  391478 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:42:32.892850  391478 status.go:422] ha-438604-m03 apiserver status = Running (err=<nil>)
	I0408 11:42:32.892859  391478 status.go:257] ha-438604-m03 status: &{Name:ha-438604-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:42:32.892881  391478 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:42:32.893256  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.893298  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.909000  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0408 11:42:32.909541  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.910027  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.910050  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.910477  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.910708  391478 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:42:32.912486  391478 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:42:32.912511  391478 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:32.912850  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.912892  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.928914  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0408 11:42:32.929391  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.929888  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.929914  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.930252  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.930464  391478 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:42:32.933421  391478 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:32.933867  391478 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:32.933898  391478 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:32.934005  391478 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:42:32.934319  391478 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:32.934358  391478 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:32.949566  391478 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0408 11:42:32.950007  391478 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:32.950531  391478 main.go:141] libmachine: Using API Version  1
	I0408 11:42:32.950553  391478 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:32.950882  391478 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:32.951100  391478 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:42:32.951306  391478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:42:32.951332  391478 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:42:32.954380  391478 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:32.954841  391478 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:32.954873  391478 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:32.955041  391478 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:42:32.955242  391478 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:42:32.955455  391478 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:42:32.955608  391478 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:42:33.039490  391478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:42:33.054944  391478 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-438604 -n ha-438604
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-438604 logs -n 25: (1.586562742s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604:/home/docker/cp-test_ha-438604-m03_ha-438604.txt                       |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604 sudo cat                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604.txt                                 |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m04 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp testdata/cp-test.txt                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604:/home/docker/cp-test_ha-438604-m04_ha-438604.txt                       |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604 sudo cat                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604.txt                                 |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03:/home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m03 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-438604 node stop m02 -v=7                                                     | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-438604 node start m02 -v=7                                                    | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:34:02
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:34:02.066668  385781 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:34:02.066787  385781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:34:02.066794  385781 out.go:304] Setting ErrFile to fd 2...
	I0408 11:34:02.066800  385781 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:34:02.067043  385781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:34:02.067747  385781 out.go:298] Setting JSON to false
	I0408 11:34:02.068775  385781 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4585,"bootTime":1712571457,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:34:02.068856  385781 start.go:139] virtualization: kvm guest
	I0408 11:34:02.071565  385781 out.go:177] * [ha-438604] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:34:02.073145  385781 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:34:02.073101  385781 notify.go:220] Checking for updates...
	I0408 11:34:02.074690  385781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:34:02.076361  385781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:34:02.077807  385781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:34:02.079178  385781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:34:02.080661  385781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:34:02.082398  385781 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:34:02.119763  385781 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 11:34:02.121146  385781 start.go:297] selected driver: kvm2
	I0408 11:34:02.121161  385781 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:34:02.121173  385781 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:34:02.121906  385781 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:34:02.121981  385781 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:34:02.137800  385781 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:34:02.137864  385781 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:34:02.138102  385781 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:34:02.138169  385781 cni.go:84] Creating CNI manager for ""
	I0408 11:34:02.138189  385781 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 11:34:02.138194  385781 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 11:34:02.138248  385781 start.go:340] cluster config:
	{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:34:02.138345  385781 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:34:02.140305  385781 out.go:177] * Starting "ha-438604" primary control-plane node in "ha-438604" cluster
	I0408 11:34:02.141699  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:34:02.141751  385781 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 11:34:02.141759  385781 cache.go:56] Caching tarball of preloaded images
	I0408 11:34:02.141844  385781 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:34:02.141854  385781 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:34:02.142160  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:34:02.142181  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json: {Name:mk0dff9aa3ef342d215af92fdd6656ec72244fce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:02.142319  385781 start.go:360] acquireMachinesLock for ha-438604: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:34:02.142347  385781 start.go:364] duration metric: took 14.556µs to acquireMachinesLock for "ha-438604"
	I0408 11:34:02.142364  385781 start.go:93] Provisioning new machine with config: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:34:02.142413  385781 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 11:34:02.145227  385781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:34:02.145437  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:34:02.145487  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:34:02.160659  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0408 11:34:02.161093  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:34:02.161657  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:34:02.161685  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:34:02.162140  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:34:02.162462  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:02.162651  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:02.163159  385781 start.go:159] libmachine.API.Create for "ha-438604" (driver="kvm2")
	I0408 11:34:02.163198  385781 client.go:168] LocalClient.Create starting
	I0408 11:34:02.163239  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:34:02.163282  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:34:02.163301  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:34:02.163420  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:34:02.163445  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:34:02.163464  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:34:02.163495  385781 main.go:141] libmachine: Running pre-create checks...
	I0408 11:34:02.163510  385781 main.go:141] libmachine: (ha-438604) Calling .PreCreateCheck
	I0408 11:34:02.164642  385781 main.go:141] libmachine: (ha-438604) Calling .GetConfigRaw
	I0408 11:34:02.165113  385781 main.go:141] libmachine: Creating machine...
	I0408 11:34:02.165130  385781 main.go:141] libmachine: (ha-438604) Calling .Create
	I0408 11:34:02.165280  385781 main.go:141] libmachine: (ha-438604) Creating KVM machine...
	I0408 11:34:02.166552  385781 main.go:141] libmachine: (ha-438604) DBG | found existing default KVM network
	I0408 11:34:02.167251  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.167108  385804 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0408 11:34:02.167275  385781 main.go:141] libmachine: (ha-438604) DBG | created network xml: 
	I0408 11:34:02.167298  385781 main.go:141] libmachine: (ha-438604) DBG | <network>
	I0408 11:34:02.167318  385781 main.go:141] libmachine: (ha-438604) DBG |   <name>mk-ha-438604</name>
	I0408 11:34:02.167329  385781 main.go:141] libmachine: (ha-438604) DBG |   <dns enable='no'/>
	I0408 11:34:02.167344  385781 main.go:141] libmachine: (ha-438604) DBG |   
	I0408 11:34:02.167357  385781 main.go:141] libmachine: (ha-438604) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 11:34:02.167364  385781 main.go:141] libmachine: (ha-438604) DBG |     <dhcp>
	I0408 11:34:02.167374  385781 main.go:141] libmachine: (ha-438604) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 11:34:02.167382  385781 main.go:141] libmachine: (ha-438604) DBG |     </dhcp>
	I0408 11:34:02.167392  385781 main.go:141] libmachine: (ha-438604) DBG |   </ip>
	I0408 11:34:02.167403  385781 main.go:141] libmachine: (ha-438604) DBG |   
	I0408 11:34:02.167412  385781 main.go:141] libmachine: (ha-438604) DBG | </network>
	I0408 11:34:02.167423  385781 main.go:141] libmachine: (ha-438604) DBG | 
	I0408 11:34:02.172990  385781 main.go:141] libmachine: (ha-438604) DBG | trying to create private KVM network mk-ha-438604 192.168.39.0/24...
	I0408 11:34:02.238721  385781 main.go:141] libmachine: (ha-438604) DBG | private KVM network mk-ha-438604 192.168.39.0/24 created
	I0408 11:34:02.238760  385781 main.go:141] libmachine: (ha-438604) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604 ...
	I0408 11:34:02.238774  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.238654  385804 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:34:02.238788  385781 main.go:141] libmachine: (ha-438604) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:34:02.238850  385781 main.go:141] libmachine: (ha-438604) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:34:02.501016  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.500853  385804 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa...
	I0408 11:34:02.714632  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.714471  385804 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/ha-438604.rawdisk...
	I0408 11:34:02.714659  385781 main.go:141] libmachine: (ha-438604) DBG | Writing magic tar header
	I0408 11:34:02.714671  385781 main.go:141] libmachine: (ha-438604) DBG | Writing SSH key tar header
	I0408 11:34:02.714686  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:02.714604  385804 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604 ...
	I0408 11:34:02.714707  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604
	I0408 11:34:02.714854  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604 (perms=drwx------)
	I0408 11:34:02.714903  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:34:02.714916  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:34:02.714934  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:34:02.714946  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:34:02.714957  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:34:02.714970  385781 main.go:141] libmachine: (ha-438604) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:34:02.715003  385781 main.go:141] libmachine: (ha-438604) Creating domain...
	I0408 11:34:02.715021  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:34:02.715031  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:34:02.715043  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:34:02.715061  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:34:02.715071  385781 main.go:141] libmachine: (ha-438604) DBG | Checking permissions on dir: /home
	I0408 11:34:02.715082  385781 main.go:141] libmachine: (ha-438604) DBG | Skipping /home - not owner
	I0408 11:34:02.716209  385781 main.go:141] libmachine: (ha-438604) define libvirt domain using xml: 
	I0408 11:34:02.716233  385781 main.go:141] libmachine: (ha-438604) <domain type='kvm'>
	I0408 11:34:02.716265  385781 main.go:141] libmachine: (ha-438604)   <name>ha-438604</name>
	I0408 11:34:02.716287  385781 main.go:141] libmachine: (ha-438604)   <memory unit='MiB'>2200</memory>
	I0408 11:34:02.716297  385781 main.go:141] libmachine: (ha-438604)   <vcpu>2</vcpu>
	I0408 11:34:02.716308  385781 main.go:141] libmachine: (ha-438604)   <features>
	I0408 11:34:02.716334  385781 main.go:141] libmachine: (ha-438604)     <acpi/>
	I0408 11:34:02.716358  385781 main.go:141] libmachine: (ha-438604)     <apic/>
	I0408 11:34:02.716365  385781 main.go:141] libmachine: (ha-438604)     <pae/>
	I0408 11:34:02.716376  385781 main.go:141] libmachine: (ha-438604)     
	I0408 11:34:02.716423  385781 main.go:141] libmachine: (ha-438604)   </features>
	I0408 11:34:02.716428  385781 main.go:141] libmachine: (ha-438604)   <cpu mode='host-passthrough'>
	I0408 11:34:02.716445  385781 main.go:141] libmachine: (ha-438604)   
	I0408 11:34:02.716459  385781 main.go:141] libmachine: (ha-438604)   </cpu>
	I0408 11:34:02.716475  385781 main.go:141] libmachine: (ha-438604)   <os>
	I0408 11:34:02.716502  385781 main.go:141] libmachine: (ha-438604)     <type>hvm</type>
	I0408 11:34:02.716511  385781 main.go:141] libmachine: (ha-438604)     <boot dev='cdrom'/>
	I0408 11:34:02.716517  385781 main.go:141] libmachine: (ha-438604)     <boot dev='hd'/>
	I0408 11:34:02.716527  385781 main.go:141] libmachine: (ha-438604)     <bootmenu enable='no'/>
	I0408 11:34:02.716534  385781 main.go:141] libmachine: (ha-438604)   </os>
	I0408 11:34:02.716547  385781 main.go:141] libmachine: (ha-438604)   <devices>
	I0408 11:34:02.716556  385781 main.go:141] libmachine: (ha-438604)     <disk type='file' device='cdrom'>
	I0408 11:34:02.716572  385781 main.go:141] libmachine: (ha-438604)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/boot2docker.iso'/>
	I0408 11:34:02.716582  385781 main.go:141] libmachine: (ha-438604)       <target dev='hdc' bus='scsi'/>
	I0408 11:34:02.716588  385781 main.go:141] libmachine: (ha-438604)       <readonly/>
	I0408 11:34:02.716595  385781 main.go:141] libmachine: (ha-438604)     </disk>
	I0408 11:34:02.716601  385781 main.go:141] libmachine: (ha-438604)     <disk type='file' device='disk'>
	I0408 11:34:02.716611  385781 main.go:141] libmachine: (ha-438604)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:34:02.716638  385781 main.go:141] libmachine: (ha-438604)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/ha-438604.rawdisk'/>
	I0408 11:34:02.716654  385781 main.go:141] libmachine: (ha-438604)       <target dev='hda' bus='virtio'/>
	I0408 11:34:02.716659  385781 main.go:141] libmachine: (ha-438604)     </disk>
	I0408 11:34:02.716664  385781 main.go:141] libmachine: (ha-438604)     <interface type='network'>
	I0408 11:34:02.716669  385781 main.go:141] libmachine: (ha-438604)       <source network='mk-ha-438604'/>
	I0408 11:34:02.716677  385781 main.go:141] libmachine: (ha-438604)       <model type='virtio'/>
	I0408 11:34:02.716682  385781 main.go:141] libmachine: (ha-438604)     </interface>
	I0408 11:34:02.716688  385781 main.go:141] libmachine: (ha-438604)     <interface type='network'>
	I0408 11:34:02.716694  385781 main.go:141] libmachine: (ha-438604)       <source network='default'/>
	I0408 11:34:02.716701  385781 main.go:141] libmachine: (ha-438604)       <model type='virtio'/>
	I0408 11:34:02.716706  385781 main.go:141] libmachine: (ha-438604)     </interface>
	I0408 11:34:02.716712  385781 main.go:141] libmachine: (ha-438604)     <serial type='pty'>
	I0408 11:34:02.716718  385781 main.go:141] libmachine: (ha-438604)       <target port='0'/>
	I0408 11:34:02.716727  385781 main.go:141] libmachine: (ha-438604)     </serial>
	I0408 11:34:02.716743  385781 main.go:141] libmachine: (ha-438604)     <console type='pty'>
	I0408 11:34:02.716763  385781 main.go:141] libmachine: (ha-438604)       <target type='serial' port='0'/>
	I0408 11:34:02.716783  385781 main.go:141] libmachine: (ha-438604)     </console>
	I0408 11:34:02.716793  385781 main.go:141] libmachine: (ha-438604)     <rng model='virtio'>
	I0408 11:34:02.716807  385781 main.go:141] libmachine: (ha-438604)       <backend model='random'>/dev/random</backend>
	I0408 11:34:02.716817  385781 main.go:141] libmachine: (ha-438604)     </rng>
	I0408 11:34:02.716828  385781 main.go:141] libmachine: (ha-438604)     
	I0408 11:34:02.716842  385781 main.go:141] libmachine: (ha-438604)     
	I0408 11:34:02.716855  385781 main.go:141] libmachine: (ha-438604)   </devices>
	I0408 11:34:02.716864  385781 main.go:141] libmachine: (ha-438604) </domain>
	I0408 11:34:02.716874  385781 main.go:141] libmachine: (ha-438604) 
	I0408 11:34:02.721501  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:27:b9:bb in network default
	I0408 11:34:02.722194  385781 main.go:141] libmachine: (ha-438604) Ensuring networks are active...
	I0408 11:34:02.722217  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:02.722897  385781 main.go:141] libmachine: (ha-438604) Ensuring network default is active
	I0408 11:34:02.723177  385781 main.go:141] libmachine: (ha-438604) Ensuring network mk-ha-438604 is active
	I0408 11:34:02.723799  385781 main.go:141] libmachine: (ha-438604) Getting domain xml...
	I0408 11:34:02.724605  385781 main.go:141] libmachine: (ha-438604) Creating domain...
	I0408 11:34:03.908787  385781 main.go:141] libmachine: (ha-438604) Waiting to get IP...
	I0408 11:34:03.909769  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:03.910172  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:03.910226  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:03.910159  385804 retry.go:31] will retry after 221.755655ms: waiting for machine to come up
	I0408 11:34:04.133792  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:04.134236  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:04.134269  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:04.134184  385804 retry.go:31] will retry after 322.264919ms: waiting for machine to come up
	I0408 11:34:04.457884  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:04.458279  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:04.458325  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:04.458266  385804 retry.go:31] will retry after 321.349466ms: waiting for machine to come up
	I0408 11:34:04.780692  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:04.781160  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:04.781191  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:04.781140  385804 retry.go:31] will retry after 497.855083ms: waiting for machine to come up
	I0408 11:34:05.281050  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:05.281620  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:05.281650  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:05.281557  385804 retry.go:31] will retry after 518.591769ms: waiting for machine to come up
	I0408 11:34:05.801844  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:05.802159  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:05.802192  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:05.802134  385804 retry.go:31] will retry after 931.498076ms: waiting for machine to come up
	I0408 11:34:06.735497  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:06.735980  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:06.736015  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:06.735911  385804 retry.go:31] will retry after 791.307745ms: waiting for machine to come up
	I0408 11:34:07.528758  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:07.529217  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:07.529246  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:07.529171  385804 retry.go:31] will retry after 1.221674233s: waiting for machine to come up
	I0408 11:34:08.752672  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:08.753212  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:08.753241  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:08.753149  385804 retry.go:31] will retry after 1.230439476s: waiting for machine to come up
	I0408 11:34:09.984915  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:09.985323  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:09.985352  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:09.985278  385804 retry.go:31] will retry after 2.06240866s: waiting for machine to come up
	I0408 11:34:12.050567  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:12.050969  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:12.051003  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:12.050927  385804 retry.go:31] will retry after 2.508679148s: waiting for machine to come up
	I0408 11:34:14.562492  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:14.562927  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:14.562958  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:14.562867  385804 retry.go:31] will retry after 3.244104264s: waiting for machine to come up
	I0408 11:34:17.808998  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:17.809378  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:17.809413  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:17.809318  385804 retry.go:31] will retry after 4.471776163s: waiting for machine to come up
	I0408 11:34:22.283484  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:22.283945  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find current IP address of domain ha-438604 in network mk-ha-438604
	I0408 11:34:22.283974  385781 main.go:141] libmachine: (ha-438604) DBG | I0408 11:34:22.283860  385804 retry.go:31] will retry after 5.2043868s: waiting for machine to come up
	I0408 11:34:27.490112  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.490616  385781 main.go:141] libmachine: (ha-438604) Found IP for machine: 192.168.39.99
	I0408 11:34:27.490660  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has current primary IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.490669  385781 main.go:141] libmachine: (ha-438604) Reserving static IP address...
	I0408 11:34:27.491212  385781 main.go:141] libmachine: (ha-438604) DBG | unable to find host DHCP lease matching {name: "ha-438604", mac: "52:54:00:cc:8e:55", ip: "192.168.39.99"} in network mk-ha-438604
	I0408 11:34:27.565711  385781 main.go:141] libmachine: (ha-438604) DBG | Getting to WaitForSSH function...
	I0408 11:34:27.565742  385781 main.go:141] libmachine: (ha-438604) Reserved static IP address: 192.168.39.99
	I0408 11:34:27.565755  385781 main.go:141] libmachine: (ha-438604) Waiting for SSH to be available...
	I0408 11:34:27.568333  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.568715  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.568749  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.568954  385781 main.go:141] libmachine: (ha-438604) DBG | Using SSH client type: external
	I0408 11:34:27.568981  385781 main.go:141] libmachine: (ha-438604) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa (-rw-------)
	I0408 11:34:27.569022  385781 main.go:141] libmachine: (ha-438604) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:34:27.569037  385781 main.go:141] libmachine: (ha-438604) DBG | About to run SSH command:
	I0408 11:34:27.569051  385781 main.go:141] libmachine: (ha-438604) DBG | exit 0
	I0408 11:34:27.699841  385781 main.go:141] libmachine: (ha-438604) DBG | SSH cmd err, output: <nil>: 
	I0408 11:34:27.700076  385781 main.go:141] libmachine: (ha-438604) KVM machine creation complete!
	I0408 11:34:27.700405  385781 main.go:141] libmachine: (ha-438604) Calling .GetConfigRaw
	I0408 11:34:27.701078  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:27.701295  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:27.701514  385781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:34:27.701540  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:34:27.702881  385781 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:34:27.702912  385781 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:34:27.702922  385781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:34:27.702939  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:27.705333  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.705694  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.705747  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.705871  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:27.706086  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.706237  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.706409  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:27.706542  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:27.706805  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:27.706820  385781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:34:27.823493  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:34:27.823520  385781 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:34:27.823532  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:27.826476  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.826874  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.826909  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.827113  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:27.827370  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.827609  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.827776  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:27.827999  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:27.828195  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:27.828207  385781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:34:27.945439  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:34:27.945540  385781 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:34:27.945548  385781 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:34:27.945556  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:27.945884  385781 buildroot.go:166] provisioning hostname "ha-438604"
	I0408 11:34:27.945925  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:27.946183  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:27.949131  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.949519  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:27.949570  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:27.949637  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:27.949858  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.950020  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:27.950183  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:27.950330  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:27.950563  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:27.950578  385781 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604 && echo "ha-438604" | sudo tee /etc/hostname
	I0408 11:34:28.080893  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604
	
	I0408 11:34:28.080931  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.084090  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.084520  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.084564  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.084756  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.085024  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.085232  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.085420  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.085602  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:28.085827  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:28.085847  385781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:34:28.210052  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:34:28.210093  385781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:34:28.210125  385781 buildroot.go:174] setting up certificates
	I0408 11:34:28.210140  385781 provision.go:84] configureAuth start
	I0408 11:34:28.210151  385781 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:34:28.210475  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:28.212972  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.213319  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.213340  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.213549  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.215880  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.216321  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.216352  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.216464  385781 provision.go:143] copyHostCerts
	I0408 11:34:28.216501  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:34:28.216546  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:34:28.216570  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:34:28.216654  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:34:28.216795  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:34:28.216824  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:34:28.216833  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:34:28.216877  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:34:28.216972  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:34:28.216999  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:34:28.217016  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:34:28.217055  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:34:28.217140  385781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604 san=[127.0.0.1 192.168.39.99 ha-438604 localhost minikube]
	I0408 11:34:28.485726  385781 provision.go:177] copyRemoteCerts
	I0408 11:34:28.485798  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:34:28.485831  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.488756  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.489078  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.489112  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.489244  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.489499  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.489693  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.489901  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:28.578890  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:34:28.579029  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:34:28.604442  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:34:28.604536  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0408 11:34:28.630287  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:34:28.630383  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:34:28.657271  385781 provision.go:87] duration metric: took 447.117011ms to configureAuth
	I0408 11:34:28.657307  385781 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:34:28.657478  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:34:28.657572  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.660193  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.660540  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.660570  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.660702  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.660919  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.661109  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.661223  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.661428  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:28.661601  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:28.661616  385781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:34:28.950417  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:34:28.950448  385781 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:34:28.950472  385781 main.go:141] libmachine: (ha-438604) Calling .GetURL
	I0408 11:34:28.951731  385781 main.go:141] libmachine: (ha-438604) DBG | Using libvirt version 6000000
	I0408 11:34:28.954061  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.954343  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.954371  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.954566  385781 main.go:141] libmachine: Docker is up and running!
	I0408 11:34:28.954581  385781 main.go:141] libmachine: Reticulating splines...
	I0408 11:34:28.954587  385781 client.go:171] duration metric: took 26.791382418s to LocalClient.Create
	I0408 11:34:28.954607  385781 start.go:167] duration metric: took 26.791450949s to libmachine.API.Create "ha-438604"
	I0408 11:34:28.954616  385781 start.go:293] postStartSetup for "ha-438604" (driver="kvm2")
	I0408 11:34:28.954627  385781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:34:28.954644  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:28.954883  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:34:28.954907  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:28.957054  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.957381  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:28.957407  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:28.957548  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:28.957736  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:28.957911  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:28.958098  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:29.046380  385781 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:34:29.050754  385781 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:34:29.050781  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:34:29.050868  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:34:29.050983  385781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:34:29.051000  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:34:29.051127  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:34:29.060928  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:34:29.086596  385781 start.go:296] duration metric: took 131.963029ms for postStartSetup
	I0408 11:34:29.086656  385781 main.go:141] libmachine: (ha-438604) Calling .GetConfigRaw
	I0408 11:34:29.087277  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:29.090168  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.090524  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.090550  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.090888  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:34:29.091133  385781 start.go:128] duration metric: took 26.948707881s to createHost
	I0408 11:34:29.091165  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:29.093225  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.093582  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.093618  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.093707  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:29.093931  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.094076  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.094207  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:29.094334  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:34:29.094577  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:34:29.094598  385781 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:34:29.208766  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576069.185207871
	
	I0408 11:34:29.208800  385781 fix.go:216] guest clock: 1712576069.185207871
	I0408 11:34:29.208812  385781 fix.go:229] Guest: 2024-04-08 11:34:29.185207871 +0000 UTC Remote: 2024-04-08 11:34:29.091150036 +0000 UTC m=+27.074198880 (delta=94.057835ms)
	I0408 11:34:29.208845  385781 fix.go:200] guest clock delta is within tolerance: 94.057835ms
	I0408 11:34:29.208852  385781 start.go:83] releasing machines lock for "ha-438604", held for 27.066494886s
	I0408 11:34:29.208879  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.209176  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:29.212055  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.212432  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.212468  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.212652  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.213176  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.213342  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:34:29.213435  385781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:34:29.213478  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:29.213601  385781 ssh_runner.go:195] Run: cat /version.json
	I0408 11:34:29.213628  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:34:29.215878  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216170  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216204  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.216224  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216343  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:29.216532  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.216552  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:29.216581  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:29.216713  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:29.216733  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:34:29.216862  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:29.216915  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:34:29.217052  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:34:29.217172  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:34:29.297222  385781 ssh_runner.go:195] Run: systemctl --version
	I0408 11:34:29.334723  385781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:34:29.495371  385781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:34:29.501487  385781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:34:29.501582  385781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:34:29.519326  385781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:34:29.519359  385781 start.go:494] detecting cgroup driver to use...
	I0408 11:34:29.519440  385781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:34:29.535461  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:34:29.550170  385781 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:34:29.550244  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:34:29.564867  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:34:29.579761  385781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:34:29.700044  385781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:34:29.837416  385781 docker.go:233] disabling docker service ...
	I0408 11:34:29.837504  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:34:29.853404  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:34:29.867589  385781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:34:30.001145  385781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:34:30.133223  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:34:30.154686  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:34:30.175031  385781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:34:30.175099  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.186271  385781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:34:30.186343  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.197564  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.208654  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.219799  385781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:34:30.231167  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.242152  385781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.260537  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:34:30.272206  385781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:34:30.282884  385781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:34:30.282948  385781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:34:30.297985  385781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:34:30.307978  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:34:30.435831  385781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 11:34:30.585561  385781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:34:30.585654  385781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:34:30.590971  385781 start.go:562] Will wait 60s for crictl version
	I0408 11:34:30.591057  385781 ssh_runner.go:195] Run: which crictl
	I0408 11:34:30.595229  385781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:34:30.635555  385781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
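After `systemctl restart crio` the log notes it "will wait 60s for socket path" and then for a crictl version, rather than assuming the runtime is ready immediately. A hedged sketch of such a wait loop (the poll interval is an assumption; only the socket path and 60s budget come from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI-O socket is up")
}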
	I0408 11:34:30.635668  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:34:30.665322  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:34:30.698754  385781 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:34:30.700339  385781 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:34:30.703208  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:30.703583  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:34:30.703611  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:34:30.704027  385781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:34:30.708371  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
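The bash one-liner above makes the host.minikube.internal entry idempotent: it filters out any old line for that name before appending the fresh mapping. The same idea as a small Go sketch (the local file name is a placeholder; on the guest the path is /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing line for name and appends "ip\tname".
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}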
	I0408 11:34:30.722166  385781 kubeadm.go:877] updating cluster {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 11:34:30.722272  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:34:30.722321  385781 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:34:30.760155  385781 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 11:34:30.760232  385781 ssh_runner.go:195] Run: which lz4
	I0408 11:34:30.764524  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0408 11:34:30.764628  385781 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 11:34:30.769065  385781 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 11:34:30.769097  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 11:34:32.282117  385781 crio.go:462] duration metric: took 1.517509219s to copy over tarball
	I0408 11:34:32.282195  385781 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 11:34:34.601298  385781 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.319070915s)
	I0408 11:34:34.601338  385781 crio.go:469] duration metric: took 2.319186776s to extract the tarball
	I0408 11:34:34.601360  385781 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 11:34:34.640264  385781 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:34:34.692273  385781 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:34:34.692300  385781 cache_images.go:84] Images are preloaded, skipping loading
	I0408 11:34:34.692309  385781 kubeadm.go:928] updating node { 192.168.39.99 8443 v1.29.3 crio true true} ...
	I0408 11:34:34.692463  385781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
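The kubelet drop-in above is generated per node, with only the binary version, hostname override and node IP varying. A sketch that renders the same unit text from those three parameters (values copied from this log) with text/template:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.29.3",
		"Hostname": "ha-438604",
		"NodeIP":   "192.168.39.99",
	})
}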
	I0408 11:34:34.692541  385781 ssh_runner.go:195] Run: crio config
	I0408 11:34:34.747951  385781 cni.go:84] Creating CNI manager for ""
	I0408 11:34:34.747975  385781 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 11:34:34.747986  385781 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 11:34:34.748008  385781 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-438604 NodeName:ha-438604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 11:34:34.748174  385781 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-438604"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
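Two of the values in the generated config matter for routing: podSubnet 10.244.0.0/16 (handed to kindnet) and serviceSubnet 10.96.0.0/12; they must not overlap, or service virtual IPs would collide with pod addresses. A tiny illustrative check of that invariant (not something the log performs explicitly):

package main

import (
	"fmt"
	"net/netip"
)

// Two CIDR prefixes overlap iff one contains the other's network address.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pods := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the config above
	svcs := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet from the config above
	fmt.Println("pod/service CIDR overlap:", overlaps(pods, svcs)) // false
}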
	I0408 11:34:34.748205  385781 kube-vip.go:111] generating kube-vip config ...
	I0408 11:34:34.748249  385781 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:34:34.766601  385781 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:34:34.766743  385781 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
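Because this is an HA profile, kube-vip hosts the API server virtual IP 192.168.39.254 and runs leader election with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1. The usual client-go constraint is leaseDuration > renewDeadline > retryPeriod; a small, purely illustrative check of those particular numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	lease := 5 * time.Second // vip_leaseduration
	renew := 3 * time.Second // vip_renewdeadline
	retry := 1 * time.Second // vip_retryperiod
	if lease > renew && renew > retry {
		fmt.Println("kube-vip leader-election timings are consistent")
	} else {
		fmt.Println("inconsistent leader-election timings")
	}
}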
	I0408 11:34:34.766813  385781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:34:34.778696  385781 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 11:34:34.778797  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0408 11:34:34.789620  385781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0408 11:34:34.808244  385781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:34:34.827056  385781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0408 11:34:34.845314  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0408 11:34:34.863846  385781 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:34:34.868386  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:34:34.882352  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:34:35.026177  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:34:35.045651  385781 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.99
	I0408 11:34:35.045678  385781 certs.go:194] generating shared ca certs ...
	I0408 11:34:35.045722  385781 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.045914  385781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:34:35.045984  385781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:34:35.046001  385781 certs.go:256] generating profile certs ...
	I0408 11:34:35.046078  385781 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:34:35.046117  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt with IP's: []
	I0408 11:34:35.413373  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt ...
	I0408 11:34:35.413422  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt: {Name:mk3b7e649553e94d1cd8e4133ae9117a1d5de74d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.413656  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key ...
	I0408 11:34:35.413676  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key: {Name:mk319d1da2826da2f55614b44acfb24a5466deec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.413799  385781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7
	I0408 11:34:35.413820  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.254]
	I0408 11:34:35.754130  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7 ...
	I0408 11:34:35.754176  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7: {Name:mka209038fcbc41dcf872a310f70eacfb93fd5c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.754349  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7 ...
	I0408 11:34:35.754365  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7: {Name:mk2c706c31402acdb212b4716cccdea007e4227c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.754435  385781 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.031d76e7 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:34:35.754508  385781 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.031d76e7 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
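The apiserver serving certificate generated above is signed for five IP SANs: the first service IP (10.96.0.1), loopback, 10.0.0.1, the node IP and the HA virtual IP, so clients reaching the API over any of those addresses can verify it. A self-contained sketch of issuing such a certificate with crypto/x509 (a throwaway in-process CA stands in for minikubeCA; the subject names are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the minikubeCA key pair the log reuses.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving certificate with the same IP SANs the log lists.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.99"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}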
	I0408 11:34:35.754559  385781 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
	I0408 11:34:35.754574  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt with IP's: []
	I0408 11:34:35.917959  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt ...
	I0408 11:34:35.917997  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt: {Name:mk7f0e598c497fabd4116cfe31d470b2ad37afd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.918151  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key ...
	I0408 11:34:35.918164  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key: {Name:mk6bdcbb968ca03ea6fe017bc03bc5094402d346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:34:35.918236  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:34:35.918254  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:34:35.918264  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:34:35.918277  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:34:35.918289  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:34:35.918302  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:34:35.918315  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:34:35.918326  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:34:35.918380  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:34:35.918416  385781 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:34:35.918423  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:34:35.918444  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:34:35.918463  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:34:35.918480  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:34:35.918516  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:34:35.918541  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:34:35.918571  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:35.918599  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:34:35.919208  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:34:35.951404  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:34:35.985713  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:34:36.014075  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:34:36.047144  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 11:34:36.076634  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 11:34:36.105534  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:34:36.132785  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:34:36.161373  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:34:36.189785  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:34:36.217551  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:34:36.245896  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 11:34:36.265595  385781 ssh_runner.go:195] Run: openssl version
	I0408 11:34:36.272414  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:34:36.284650  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:36.290307  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:36.290387  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:34:36.296816  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:34:36.309837  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:34:36.321591  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:34:36.326829  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:34:36.326907  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:34:36.333615  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:34:36.346280  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:34:36.359487  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:34:36.365862  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:34:36.365971  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:34:36.373862  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
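Each CA above is installed twice: the PEM is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS clients can find it by hash. A sketch of that linking step, shelling out to openssl for the hash just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at the PEM file.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}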
	I0408 11:34:36.386273  385781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:34:36.391001  385781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:34:36.391080  385781 kubeadm.go:391] StartCluster: {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:34:36.391237  385781 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 11:34:36.391292  385781 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:34:36.434257  385781 cri.go:89] found id: ""
	I0408 11:34:36.434342  385781 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 11:34:36.445873  385781 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 11:34:36.456984  385781 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 11:34:36.468015  385781 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 11:34:36.468049  385781 kubeadm.go:156] found existing configuration files:
	
	I0408 11:34:36.468167  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 11:34:36.478796  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 11:34:36.478893  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 11:34:36.490758  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 11:34:36.502697  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 11:34:36.502791  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 11:34:36.513956  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 11:34:36.530576  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 11:34:36.530657  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 11:34:36.541885  385781 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 11:34:36.552304  385781 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 11:34:36.552378  385781 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 11:34:36.563246  385781 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 11:34:36.858032  385781 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 11:34:48.354356  385781 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 11:34:48.354447  385781 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 11:34:48.354553  385781 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 11:34:48.354646  385781 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 11:34:48.354736  385781 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 11:34:48.354791  385781 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 11:34:48.356495  385781 out.go:204]   - Generating certificates and keys ...
	I0408 11:34:48.356592  385781 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 11:34:48.356651  385781 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 11:34:48.356757  385781 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 11:34:48.356839  385781 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 11:34:48.356912  385781 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 11:34:48.356993  385781 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 11:34:48.357114  385781 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 11:34:48.357255  385781 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-438604 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I0408 11:34:48.357335  385781 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 11:34:48.357500  385781 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-438604 localhost] and IPs [192.168.39.99 127.0.0.1 ::1]
	I0408 11:34:48.357591  385781 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 11:34:48.357685  385781 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 11:34:48.357738  385781 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 11:34:48.357811  385781 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 11:34:48.357885  385781 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 11:34:48.357958  385781 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 11:34:48.358033  385781 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 11:34:48.358132  385781 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 11:34:48.358240  385781 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 11:34:48.358344  385781 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 11:34:48.358436  385781 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 11:34:48.361023  385781 out.go:204]   - Booting up control plane ...
	I0408 11:34:48.361136  385781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 11:34:48.361230  385781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 11:34:48.361309  385781 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 11:34:48.361445  385781 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 11:34:48.361597  385781 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 11:34:48.361656  385781 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 11:34:48.361820  385781 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 11:34:48.361924  385781 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.562122 seconds
	I0408 11:34:48.362018  385781 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 11:34:48.362156  385781 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 11:34:48.362234  385781 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 11:34:48.362383  385781 kubeadm.go:309] [mark-control-plane] Marking the node ha-438604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 11:34:48.362431  385781 kubeadm.go:309] [bootstrap-token] Using token: u4tba0.5qhrqha5k5ry6q7a
	I0408 11:34:48.364067  385781 out.go:204]   - Configuring RBAC rules ...
	I0408 11:34:48.364189  385781 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 11:34:48.364268  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 11:34:48.364410  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 11:34:48.364560  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 11:34:48.364663  385781 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 11:34:48.364763  385781 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 11:34:48.364889  385781 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 11:34:48.364933  385781 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 11:34:48.364972  385781 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 11:34:48.364978  385781 kubeadm.go:309] 
	I0408 11:34:48.365029  385781 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 11:34:48.365035  385781 kubeadm.go:309] 
	I0408 11:34:48.365096  385781 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 11:34:48.365102  385781 kubeadm.go:309] 
	I0408 11:34:48.365123  385781 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 11:34:48.365174  385781 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 11:34:48.365221  385781 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 11:34:48.365228  385781 kubeadm.go:309] 
	I0408 11:34:48.365283  385781 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 11:34:48.365290  385781 kubeadm.go:309] 
	I0408 11:34:48.365333  385781 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 11:34:48.365339  385781 kubeadm.go:309] 
	I0408 11:34:48.365380  385781 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 11:34:48.365446  385781 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 11:34:48.365518  385781 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 11:34:48.365525  385781 kubeadm.go:309] 
	I0408 11:34:48.365594  385781 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 11:34:48.365661  385781 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 11:34:48.365667  385781 kubeadm.go:309] 
	I0408 11:34:48.365741  385781 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token u4tba0.5qhrqha5k5ry6q7a \
	I0408 11:34:48.365861  385781 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 11:34:48.365908  385781 kubeadm.go:309] 	--control-plane 
	I0408 11:34:48.365918  385781 kubeadm.go:309] 
	I0408 11:34:48.366019  385781 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 11:34:48.366029  385781 kubeadm.go:309] 
	I0408 11:34:48.366124  385781 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token u4tba0.5qhrqha5k5ry6q7a \
	I0408 11:34:48.366275  385781 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
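The join commands printed by kubeadm pin the cluster CA with --discovery-token-ca-cert-hash; that value is the SHA-256 of the CA certificate's Subject Public Key Info. A sketch that recomputes it from the ca.crt used throughout this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA path used throughout this log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's CA cert hash is the SHA-256 of the Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}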
	I0408 11:34:48.366291  385781 cni.go:84] Creating CNI manager for ""
	I0408 11:34:48.366298  385781 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 11:34:48.369157  385781 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 11:34:48.370626  385781 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 11:34:48.381241  385781 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0408 11:34:48.381271  385781 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0408 11:34:48.452826  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0408 11:34:48.917927  385781 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 11:34:48.918035  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:48.918033  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-438604 minikube.k8s.io/updated_at=2024_04_08T11_34_48_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=ha-438604 minikube.k8s.io/primary=true
	I0408 11:34:49.066865  385781 ops.go:34] apiserver oom_adj: -16
	I0408 11:34:49.067087  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:49.567180  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:50.067069  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:50.567083  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:51.067823  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:51.567813  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:52.067770  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:52.567206  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:53.067114  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:53.567893  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:54.067938  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:54.568071  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:55.067135  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:55.567949  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:56.067099  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:56.567925  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:57.067066  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:57.568107  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:58.068083  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:58.568062  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:59.067573  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:34:59.567185  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:35:00.067869  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 11:35:00.182385  385781 kubeadm.go:1107] duration metric: took 11.264461996s to wait for elevateKubeSystemPrivileges
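The block of `kubectl get sa default` runs above is a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is where the 11.26s elevateKubeSystemPrivileges figure comes from. A hedged sketch of such a loop (the two-minute cap is an assumption; the binary and kubeconfig paths come from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or times out.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}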
	W0408 11:35:00.182426  385781 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 11:35:00.182434  385781 kubeadm.go:393] duration metric: took 23.791361403s to StartCluster
	I0408 11:35:00.182453  385781 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:00.182543  385781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:35:00.183344  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:00.183589  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 11:35:00.183601  385781 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:35:00.183628  385781 start.go:240] waiting for startup goroutines ...
	I0408 11:35:00.183638  385781 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 11:35:00.183732  385781 addons.go:69] Setting storage-provisioner=true in profile "ha-438604"
	I0408 11:35:00.183741  385781 addons.go:69] Setting default-storageclass=true in profile "ha-438604"
	I0408 11:35:00.183774  385781 addons.go:234] Setting addon storage-provisioner=true in "ha-438604"
	I0408 11:35:00.183789  385781 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-438604"
	I0408 11:35:00.183812  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:35:00.183838  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:00.184267  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.184318  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.184337  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.184346  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.200332  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0408 11:35:00.200350  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0408 11:35:00.200827  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.200907  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.201440  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.201468  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.201499  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.201521  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.201830  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.201938  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.202173  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:00.202469  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.202502  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.204679  385781 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:35:00.205057  385781 kapi.go:59] client config for ha-438604: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 11:35:00.205626  385781 cert_rotation.go:137] Starting client certificate rotation controller
	I0408 11:35:00.205976  385781 addons.go:234] Setting addon default-storageclass=true in "ha-438604"
	I0408 11:35:00.206027  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:35:00.206426  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.206465  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.220086  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0408 11:35:00.220600  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.221285  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.221318  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.221738  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.221994  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:00.223349  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0408 11:35:00.223736  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:35:00.223952  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.226265  385781 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 11:35:00.224435  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.226325  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.226781  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.227873  385781 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:35:00.227895  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 11:35:00.227921  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:35:00.228350  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:00.228379  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:00.231520  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.232067  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:35:00.232100  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.232401  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:35:00.232621  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:35:00.232894  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:35:00.233158  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:35:00.245841  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0408 11:35:00.246406  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:00.247061  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:00.247085  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:00.247451  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:00.247720  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:00.249556  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:35:00.249900  385781 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 11:35:00.249917  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 11:35:00.249946  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:35:00.252598  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.253036  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:35:00.253064  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:00.253198  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:35:00.253399  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:35:00.253575  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:35:00.253732  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:35:00.394744  385781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 11:35:00.396448  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 11:35:00.406682  385781 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 11:35:01.102749  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.102783  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.102798  385781 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0408 11:35:01.102901  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.102925  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.103100  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103139  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103160  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.103255  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.103288  385781 main.go:141] libmachine: (ha-438604) DBG | Closing plugin on server side
	I0408 11:35:01.103306  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103360  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103372  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.103383  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.103473  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103552  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103552  385781 main.go:141] libmachine: (ha-438604) DBG | Closing plugin on server side
	I0408 11:35:01.103660  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.103675  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.103814  385781 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0408 11:35:01.103824  385781 round_trippers.go:469] Request Headers:
	I0408 11:35:01.103833  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:35:01.103838  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:35:01.115283  385781 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0408 11:35:01.116011  385781 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0408 11:35:01.116029  385781 round_trippers.go:469] Request Headers:
	I0408 11:35:01.116036  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:35:01.116039  385781 round_trippers.go:473]     Content-Type: application/json
	I0408 11:35:01.116042  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:35:01.119042  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:35:01.119194  385781 main.go:141] libmachine: Making call to close driver server
	I0408 11:35:01.119207  385781 main.go:141] libmachine: (ha-438604) Calling .Close
	I0408 11:35:01.119533  385781 main.go:141] libmachine: Successfully made call to close driver server
	I0408 11:35:01.119555  385781 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 11:35:01.121499  385781 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0408 11:35:01.122818  385781 addons.go:505] duration metric: took 939.178087ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0408 11:35:01.122859  385781 start.go:245] waiting for cluster config update ...
	I0408 11:35:01.122872  385781 start.go:254] writing updated cluster config ...
	I0408 11:35:01.124696  385781 out.go:177] 
	I0408 11:35:01.126381  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:01.126462  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:35:01.128231  385781 out.go:177] * Starting "ha-438604-m02" control-plane node in "ha-438604" cluster
	I0408 11:35:01.129662  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:35:01.129691  385781 cache.go:56] Caching tarball of preloaded images
	I0408 11:35:01.129771  385781 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:35:01.129784  385781 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:35:01.129858  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:35:01.130021  385781 start.go:360] acquireMachinesLock for ha-438604-m02: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:35:01.130063  385781 start.go:364] duration metric: took 22.772µs to acquireMachinesLock for "ha-438604-m02"
	I0408 11:35:01.130080  385781 start.go:93] Provisioning new machine with config: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:35:01.130139  385781 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0408 11:35:01.132021  385781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:35:01.132114  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:01.132139  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:01.147409  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46277
	I0408 11:35:01.148043  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:01.148509  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:01.148536  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:01.148887  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:01.149103  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:01.149232  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:01.149418  385781 start.go:159] libmachine.API.Create for "ha-438604" (driver="kvm2")
	I0408 11:35:01.149446  385781 client.go:168] LocalClient.Create starting
	I0408 11:35:01.149487  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:35:01.149527  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:35:01.149553  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:35:01.149608  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:35:01.149627  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:35:01.149638  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:35:01.149651  385781 main.go:141] libmachine: Running pre-create checks...
	I0408 11:35:01.149659  385781 main.go:141] libmachine: (ha-438604-m02) Calling .PreCreateCheck
	I0408 11:35:01.149830  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetConfigRaw
	I0408 11:35:01.150253  385781 main.go:141] libmachine: Creating machine...
	I0408 11:35:01.150270  385781 main.go:141] libmachine: (ha-438604-m02) Calling .Create
	I0408 11:35:01.150418  385781 main.go:141] libmachine: (ha-438604-m02) Creating KVM machine...
	I0408 11:35:01.151655  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found existing default KVM network
	I0408 11:35:01.151839  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found existing private KVM network mk-ha-438604
	I0408 11:35:01.151976  385781 main.go:141] libmachine: (ha-438604-m02) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02 ...
	I0408 11:35:01.152002  385781 main.go:141] libmachine: (ha-438604-m02) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:35:01.152042  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.151946  386193 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:35:01.152146  385781 main.go:141] libmachine: (ha-438604-m02) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:35:01.392132  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.392002  386193 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa...
	I0408 11:35:01.681870  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.681715  386193 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/ha-438604-m02.rawdisk...
	I0408 11:35:01.681916  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Writing magic tar header
	I0408 11:35:01.681930  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Writing SSH key tar header
	I0408 11:35:01.681943  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:01.681843  386193 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02 ...
	I0408 11:35:01.681959  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02
	I0408 11:35:01.682049  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02 (perms=drwx------)
	I0408 11:35:01.682081  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:35:01.682092  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:35:01.682108  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:35:01.682122  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:35:01.682133  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:35:01.682148  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:35:01.682159  385781 main.go:141] libmachine: (ha-438604-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:35:01.682170  385781 main.go:141] libmachine: (ha-438604-m02) Creating domain...
	I0408 11:35:01.682185  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:35:01.682201  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:35:01.682215  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:35:01.682227  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Checking permissions on dir: /home
	I0408 11:35:01.682237  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Skipping /home - not owner
	I0408 11:35:01.683239  385781 main.go:141] libmachine: (ha-438604-m02) define libvirt domain using xml: 
	I0408 11:35:01.683263  385781 main.go:141] libmachine: (ha-438604-m02) <domain type='kvm'>
	I0408 11:35:01.683271  385781 main.go:141] libmachine: (ha-438604-m02)   <name>ha-438604-m02</name>
	I0408 11:35:01.683276  385781 main.go:141] libmachine: (ha-438604-m02)   <memory unit='MiB'>2200</memory>
	I0408 11:35:01.683282  385781 main.go:141] libmachine: (ha-438604-m02)   <vcpu>2</vcpu>
	I0408 11:35:01.683286  385781 main.go:141] libmachine: (ha-438604-m02)   <features>
	I0408 11:35:01.683291  385781 main.go:141] libmachine: (ha-438604-m02)     <acpi/>
	I0408 11:35:01.683296  385781 main.go:141] libmachine: (ha-438604-m02)     <apic/>
	I0408 11:35:01.683301  385781 main.go:141] libmachine: (ha-438604-m02)     <pae/>
	I0408 11:35:01.683305  385781 main.go:141] libmachine: (ha-438604-m02)     
	I0408 11:35:01.683316  385781 main.go:141] libmachine: (ha-438604-m02)   </features>
	I0408 11:35:01.683336  385781 main.go:141] libmachine: (ha-438604-m02)   <cpu mode='host-passthrough'>
	I0408 11:35:01.683346  385781 main.go:141] libmachine: (ha-438604-m02)   
	I0408 11:35:01.683357  385781 main.go:141] libmachine: (ha-438604-m02)   </cpu>
	I0408 11:35:01.683378  385781 main.go:141] libmachine: (ha-438604-m02)   <os>
	I0408 11:35:01.683400  385781 main.go:141] libmachine: (ha-438604-m02)     <type>hvm</type>
	I0408 11:35:01.683415  385781 main.go:141] libmachine: (ha-438604-m02)     <boot dev='cdrom'/>
	I0408 11:35:01.683422  385781 main.go:141] libmachine: (ha-438604-m02)     <boot dev='hd'/>
	I0408 11:35:01.683432  385781 main.go:141] libmachine: (ha-438604-m02)     <bootmenu enable='no'/>
	I0408 11:35:01.683446  385781 main.go:141] libmachine: (ha-438604-m02)   </os>
	I0408 11:35:01.683458  385781 main.go:141] libmachine: (ha-438604-m02)   <devices>
	I0408 11:35:01.683470  385781 main.go:141] libmachine: (ha-438604-m02)     <disk type='file' device='cdrom'>
	I0408 11:35:01.683491  385781 main.go:141] libmachine: (ha-438604-m02)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/boot2docker.iso'/>
	I0408 11:35:01.683506  385781 main.go:141] libmachine: (ha-438604-m02)       <target dev='hdc' bus='scsi'/>
	I0408 11:35:01.683519  385781 main.go:141] libmachine: (ha-438604-m02)       <readonly/>
	I0408 11:35:01.683529  385781 main.go:141] libmachine: (ha-438604-m02)     </disk>
	I0408 11:35:01.683539  385781 main.go:141] libmachine: (ha-438604-m02)     <disk type='file' device='disk'>
	I0408 11:35:01.683551  385781 main.go:141] libmachine: (ha-438604-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:35:01.683568  385781 main.go:141] libmachine: (ha-438604-m02)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/ha-438604-m02.rawdisk'/>
	I0408 11:35:01.683583  385781 main.go:141] libmachine: (ha-438604-m02)       <target dev='hda' bus='virtio'/>
	I0408 11:35:01.683596  385781 main.go:141] libmachine: (ha-438604-m02)     </disk>
	I0408 11:35:01.683607  385781 main.go:141] libmachine: (ha-438604-m02)     <interface type='network'>
	I0408 11:35:01.683620  385781 main.go:141] libmachine: (ha-438604-m02)       <source network='mk-ha-438604'/>
	I0408 11:35:01.683630  385781 main.go:141] libmachine: (ha-438604-m02)       <model type='virtio'/>
	I0408 11:35:01.683638  385781 main.go:141] libmachine: (ha-438604-m02)     </interface>
	I0408 11:35:01.683649  385781 main.go:141] libmachine: (ha-438604-m02)     <interface type='network'>
	I0408 11:35:01.683661  385781 main.go:141] libmachine: (ha-438604-m02)       <source network='default'/>
	I0408 11:35:01.683670  385781 main.go:141] libmachine: (ha-438604-m02)       <model type='virtio'/>
	I0408 11:35:01.683680  385781 main.go:141] libmachine: (ha-438604-m02)     </interface>
	I0408 11:35:01.683703  385781 main.go:141] libmachine: (ha-438604-m02)     <serial type='pty'>
	I0408 11:35:01.683715  385781 main.go:141] libmachine: (ha-438604-m02)       <target port='0'/>
	I0408 11:35:01.683727  385781 main.go:141] libmachine: (ha-438604-m02)     </serial>
	I0408 11:35:01.683737  385781 main.go:141] libmachine: (ha-438604-m02)     <console type='pty'>
	I0408 11:35:01.683750  385781 main.go:141] libmachine: (ha-438604-m02)       <target type='serial' port='0'/>
	I0408 11:35:01.683763  385781 main.go:141] libmachine: (ha-438604-m02)     </console>
	I0408 11:35:01.683798  385781 main.go:141] libmachine: (ha-438604-m02)     <rng model='virtio'>
	I0408 11:35:01.683825  385781 main.go:141] libmachine: (ha-438604-m02)       <backend model='random'>/dev/random</backend>
	I0408 11:35:01.683836  385781 main.go:141] libmachine: (ha-438604-m02)     </rng>
	I0408 11:35:01.683846  385781 main.go:141] libmachine: (ha-438604-m02)     
	I0408 11:35:01.683857  385781 main.go:141] libmachine: (ha-438604-m02)     
	I0408 11:35:01.683863  385781 main.go:141] libmachine: (ha-438604-m02)   </devices>
	I0408 11:35:01.683894  385781 main.go:141] libmachine: (ha-438604-m02) </domain>
	I0408 11:35:01.683911  385781 main.go:141] libmachine: (ha-438604-m02) 
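The XML echoed line by line above is the libvirt domain definition minikube generates for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-438604 network plus the default NAT network). A stripped-down sketch of producing such a definition from a Go template (only fields visible in the log are filled in; the rendered XML would then be handed to libvirt, e.g. via virsh define, rather than printed):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Minimal subset of the domain definition logged above; the real template
    // in minikube carries more devices (CD-ROM, serial console, RNG, second NIC).
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domainConfig struct {
    	Name      string
    	MemoryMiB int
    	CPUs      int
    	DiskPath  string
    	Network   string
    }

    func main() {
    	cfg := domainConfig{
    		Name:      "ha-438604-m02",
    		MemoryMiB: 2200,
    		CPUs:      2,
    		DiskPath:  "/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/ha-438604-m02.rawdisk",
    		Network:   "mk-ha-438604",
    	}
    	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }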
	I0408 11:35:01.690956  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:23:75:de in network default
	I0408 11:35:01.691632  385781 main.go:141] libmachine: (ha-438604-m02) Ensuring networks are active...
	I0408 11:35:01.691663  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:01.692420  385781 main.go:141] libmachine: (ha-438604-m02) Ensuring network default is active
	I0408 11:35:01.692663  385781 main.go:141] libmachine: (ha-438604-m02) Ensuring network mk-ha-438604 is active
	I0408 11:35:01.693031  385781 main.go:141] libmachine: (ha-438604-m02) Getting domain xml...
	I0408 11:35:01.693796  385781 main.go:141] libmachine: (ha-438604-m02) Creating domain...
	I0408 11:35:02.958387  385781 main.go:141] libmachine: (ha-438604-m02) Waiting to get IP...
	I0408 11:35:02.959455  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:02.959978  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:02.960023  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:02.959949  386193 retry.go:31] will retry after 261.150221ms: waiting for machine to come up
	I0408 11:35:03.222433  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:03.222924  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:03.222948  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:03.222873  386193 retry.go:31] will retry after 338.774375ms: waiting for machine to come up
	I0408 11:35:03.563954  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:03.564602  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:03.564631  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:03.564553  386193 retry.go:31] will retry after 443.047947ms: waiting for machine to come up
	I0408 11:35:04.009061  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:04.009635  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:04.009666  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:04.009556  386193 retry.go:31] will retry after 435.72415ms: waiting for machine to come up
	I0408 11:35:04.447396  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:04.447952  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:04.447991  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:04.447873  386193 retry.go:31] will retry after 565.812097ms: waiting for machine to come up
	I0408 11:35:05.015745  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:05.016316  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:05.016374  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:05.016267  386193 retry.go:31] will retry after 728.831545ms: waiting for machine to come up
	I0408 11:35:05.746267  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:05.746722  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:05.746747  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:05.746684  386193 retry.go:31] will retry after 883.417203ms: waiting for machine to come up
	I0408 11:35:06.632192  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:06.632711  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:06.632752  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:06.632653  386193 retry.go:31] will retry after 1.443827675s: waiting for machine to come up
	I0408 11:35:08.078256  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:08.078710  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:08.078743  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:08.078693  386193 retry.go:31] will retry after 1.582710551s: waiting for machine to come up
	I0408 11:35:09.663511  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:09.664043  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:09.664087  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:09.663968  386193 retry.go:31] will retry after 1.808371147s: waiting for machine to come up
	I0408 11:35:11.474372  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:11.474814  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:11.474841  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:11.474752  386193 retry.go:31] will retry after 2.023384632s: waiting for machine to come up
	I0408 11:35:13.500588  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:13.501181  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:13.501208  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:13.501125  386193 retry.go:31] will retry after 2.843950856s: waiting for machine to come up
	I0408 11:35:16.347031  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:16.347506  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:16.347537  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:16.347467  386193 retry.go:31] will retry after 3.702430785s: waiting for machine to come up
	I0408 11:35:20.051340  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:20.051762  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find current IP address of domain ha-438604-m02 in network mk-ha-438604
	I0408 11:35:20.051824  385781 main.go:141] libmachine: (ha-438604-m02) DBG | I0408 11:35:20.051746  386193 retry.go:31] will retry after 3.602659027s: waiting for machine to come up
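The "will retry after ..." lines above show minikube polling the network's DHCP leases with a growing, jittered delay until the new domain picks up an address. A rough sketch of that pattern (the lookup function and backoff constants here are illustrative stand-ins, not minikube's actual retry.go implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the DHCP-lease query done by the KVM driver;
    // it is a placeholder for illustration only.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP polls until an address appears, backing off with jitter the
    // way the retry lines in the log above do.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		delay = delay * 3 / 2 // grow the base delay each attempt
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }

    func main() {
    	if _, err := waitForIP("ha-438604-m02", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }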
	I0408 11:35:23.657430  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.658004  385781 main.go:141] libmachine: (ha-438604-m02) Found IP for machine: 192.168.39.219
	I0408 11:35:23.658029  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has current primary IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.658036  385781 main.go:141] libmachine: (ha-438604-m02) Reserving static IP address...
	I0408 11:35:23.658598  385781 main.go:141] libmachine: (ha-438604-m02) DBG | unable to find host DHCP lease matching {name: "ha-438604-m02", mac: "52:54:00:b9:2b:19", ip: "192.168.39.219"} in network mk-ha-438604
	I0408 11:35:23.735106  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Getting to WaitForSSH function...
	I0408 11:35:23.735145  385781 main.go:141] libmachine: (ha-438604-m02) Reserved static IP address: 192.168.39.219
	I0408 11:35:23.735159  385781 main.go:141] libmachine: (ha-438604-m02) Waiting for SSH to be available...
	I0408 11:35:23.738077  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.738536  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:23.738569  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.738646  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Using SSH client type: external
	I0408 11:35:23.738695  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa (-rw-------)
	I0408 11:35:23.738734  385781 main.go:141] libmachine: (ha-438604-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.219 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:35:23.738757  385781 main.go:141] libmachine: (ha-438604-m02) DBG | About to run SSH command:
	I0408 11:35:23.738843  385781 main.go:141] libmachine: (ha-438604-m02) DBG | exit 0
	I0408 11:35:23.867834  385781 main.go:141] libmachine: (ha-438604-m02) DBG | SSH cmd err, output: <nil>: 
	I0408 11:35:23.868060  385781 main.go:141] libmachine: (ha-438604-m02) KVM machine creation complete!
	I0408 11:35:23.868417  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetConfigRaw
	I0408 11:35:23.868987  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:23.869221  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:23.869442  385781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:35:23.869458  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:35:23.870907  385781 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:35:23.870922  385781 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:35:23.870929  385781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:35:23.870939  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:23.873288  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.873752  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:23.873781  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.873904  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:23.874122  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.874287  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.874431  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:23.874618  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:23.874915  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:23.874936  385781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:35:23.987508  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:35:23.987555  385781 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:35:23.987567  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:23.990503  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.990872  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:23.990902  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:23.991019  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:23.991290  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.991498  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:23.991673  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:23.991936  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:23.992170  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:23.992184  385781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:35:24.104835  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:35:24.104924  385781 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:35:24.104935  385781 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:35:24.104947  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:24.105225  385781 buildroot.go:166] provisioning hostname "ha-438604-m02"
	I0408 11:35:24.105261  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:24.105530  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.108554  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.108963  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.108996  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.109111  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.109348  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.109545  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.109754  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.109975  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:24.110195  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:24.110210  385781 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604-m02 && echo "ha-438604-m02" | sudo tee /etc/hostname
	I0408 11:35:24.234521  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604-m02
	
	I0408 11:35:24.234559  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.237824  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.238241  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.238272  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.238517  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.238741  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.238952  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.239097  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.239278  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:24.239450  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:24.239485  385781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:35:24.362242  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:35:24.362280  385781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:35:24.362298  385781 buildroot.go:174] setting up certificates
	I0408 11:35:24.362311  385781 provision.go:84] configureAuth start
	I0408 11:35:24.362321  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetMachineName
	I0408 11:35:24.362659  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:24.365655  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.366126  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.366170  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.366343  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.369125  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.369439  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.369464  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.369641  385781 provision.go:143] copyHostCerts
	I0408 11:35:24.369673  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:35:24.369705  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:35:24.369714  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:35:24.369795  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:35:24.369881  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:35:24.369899  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:35:24.369907  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:35:24.369929  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:35:24.369984  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:35:24.370012  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:35:24.370018  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:35:24.370053  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:35:24.370132  385781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604-m02 san=[127.0.0.1 192.168.39.219 ha-438604-m02 localhost minikube]
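The provision step above mints a per-machine server certificate signed by the profile CA, with the SANs listed in the log (loopback, the node IP, the hostname, localhost, minikube) and the profile's 26280h expiration. A condensed sketch of issuing such a certificate with the standard library (a throwaway in-memory CA replaces ca.pem/ca-key.pem, key size is illustrative, PEM encoding and most error handling are trimmed):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the profile's ca.pem/ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-438604-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-438604-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.219")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
    }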
	I0408 11:35:24.565808  385781 provision.go:177] copyRemoteCerts
	I0408 11:35:24.565885  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:35:24.565921  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.568808  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.569116  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.569151  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.569313  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.569531  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.569725  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.569861  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:24.659113  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:35:24.659185  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:35:24.686861  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:35:24.686942  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 11:35:24.714397  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:35:24.714472  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:35:24.740893  385781 provision.go:87] duration metric: took 378.567432ms to configureAuth
	I0408 11:35:24.740932  385781 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:35:24.741131  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:24.741251  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:24.744030  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.744384  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:24.744419  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:24.744618  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:24.744839  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.745029  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:24.745181  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:24.745369  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:24.745557  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:24.745573  385781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:35:25.029666  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:35:25.029709  385781 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:35:25.029721  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetURL
	I0408 11:35:25.031297  385781 main.go:141] libmachine: (ha-438604-m02) DBG | Using libvirt version 6000000
	I0408 11:35:25.033496  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.033854  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.033888  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.034045  385781 main.go:141] libmachine: Docker is up and running!
	I0408 11:35:25.034063  385781 main.go:141] libmachine: Reticulating splines...
	I0408 11:35:25.034072  385781 client.go:171] duration metric: took 23.884615127s to LocalClient.Create
	I0408 11:35:25.034102  385781 start.go:167] duration metric: took 23.884683605s to libmachine.API.Create "ha-438604"
	I0408 11:35:25.034115  385781 start.go:293] postStartSetup for "ha-438604-m02" (driver="kvm2")
	I0408 11:35:25.034132  385781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:35:25.034159  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.034439  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:35:25.034467  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:25.036530  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.036862  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.036890  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.037039  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.037302  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.037493  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.037655  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:25.127334  385781 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:35:25.132049  385781 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:35:25.132086  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:35:25.132166  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:35:25.132247  385781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:35:25.132258  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:35:25.132340  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:35:25.143186  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:35:25.168828  385781 start.go:296] duration metric: took 134.691063ms for postStartSetup
	I0408 11:35:25.168896  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetConfigRaw
	I0408 11:35:25.169508  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:25.172095  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.172517  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.172549  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.172752  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:35:25.172963  385781 start.go:128] duration metric: took 24.042813058s to createHost
	I0408 11:35:25.172988  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:25.175491  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.175816  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.175849  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.176039  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.176289  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.176489  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.176688  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.176859  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:35:25.177080  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0408 11:35:25.177094  385781 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:35:25.289062  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576125.259373380
	
	I0408 11:35:25.289094  385781 fix.go:216] guest clock: 1712576125.259373380
	I0408 11:35:25.289110  385781 fix.go:229] Guest: 2024-04-08 11:35:25.25937338 +0000 UTC Remote: 2024-04-08 11:35:25.172976644 +0000 UTC m=+83.156025480 (delta=86.396736ms)
	I0408 11:35:25.289132  385781 fix.go:200] guest clock delta is within tolerance: 86.396736ms
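
The guest-clock check above reduces to an absolute-difference comparison against a tolerance. A minimal Go sketch of that check, using the ~86ms delta reported in the log; the 2-second tolerance value is an assumption for illustration, not taken from minikube's fix.go:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock, returning the absolute delta that gets logged above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(86396736 * time.Nanosecond) // the ~86ms delta reported above
	if d, ok := withinTolerance(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", d)
	}
}
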
	I0408 11:35:25.289140  385781 start.go:83] releasing machines lock for "ha-438604-m02", held for 24.15906757s
	I0408 11:35:25.289169  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.289462  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:25.292050  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.292434  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.292462  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.295135  385781 out.go:177] * Found network options:
	I0408 11:35:25.296672  385781 out.go:177]   - NO_PROXY=192.168.39.99
	W0408 11:35:25.298045  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:35:25.298077  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.298663  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.298889  385781 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:35:25.298977  385781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:35:25.299025  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	W0408 11:35:25.299347  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:35:25.299425  385781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:35:25.299447  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:35:25.301963  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302231  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302325  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.302355  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302521  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.302605  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:25.302633  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:25.302738  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.302808  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:35:25.302949  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.302958  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:35:25.303128  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:35:25.303181  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:25.303259  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:35:25.543966  385781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:35:25.550799  385781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:35:25.550871  385781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:35:25.568455  385781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:35:25.568492  385781 start.go:494] detecting cgroup driver to use...
	I0408 11:35:25.568573  385781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:35:25.588994  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:35:25.605132  385781 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:35:25.605214  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:35:25.620512  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:35:25.636154  385781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:35:25.757479  385781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:35:25.896785  385781 docker.go:233] disabling docker service ...
	I0408 11:35:25.896866  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:35:25.912910  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:35:25.926867  385781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:35:26.076910  385781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:35:26.219444  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:35:26.234391  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:35:26.254212  385781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:35:26.254293  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.266948  385781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:35:26.267033  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.279161  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.290792  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.302547  385781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:35:26.314375  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.325941  385781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.344703  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:35:26.357000  385781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:35:26.367883  385781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:35:26.367961  385781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:35:26.383007  385781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
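
The netfilter repair above is a probe-then-fallback sequence: check the bridge sysctl, load br_netfilter if the key is missing, then enable IPv4 forwarding. A local Go sketch of that sequence with os/exec (the commands mirror the log; minikube itself runs them over SSH via ssh_runner, so the program structure here is illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl first, as in the log above.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The key is absent until the module is loaded, so fall back to modprobe.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("could not load br_netfilter:", err)
			return
		}
	}
	// Finally make sure IPv4 forwarding is on for the pod network.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
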
	I0408 11:35:26.394174  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:35:26.535534  385781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 11:35:26.689603  385781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:35:26.689697  385781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:35:26.694793  385781 start.go:562] Will wait 60s for crictl version
	I0408 11:35:26.694856  385781 ssh_runner.go:195] Run: which crictl
	I0408 11:35:26.698809  385781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:35:26.737497  385781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:35:26.737591  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:35:26.767566  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:35:26.799948  385781 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:35:26.801685  385781 out.go:177]   - env NO_PROXY=192.168.39.99
	I0408 11:35:26.803419  385781 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:35:26.806533  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:26.806893  385781 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:35:16 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:35:26.806934  385781 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:35:26.807121  385781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:35:26.811543  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
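
The /etc/hosts rewrite above stays idempotent by filtering out any existing host.minikube.internal entry before appending the fresh mapping and copying the file back. A Go sketch of the same upsert (the rendered entry mirrors the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line already ending in "\t"+name and appends
// "ip\tname", mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	return out + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	data, _ := os.ReadFile("/etc/hosts")
	fmt.Print(upsertHostsEntry(string(data), "192.168.39.1", "host.minikube.internal"))
}
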
	I0408 11:35:26.824447  385781 mustload.go:65] Loading cluster: ha-438604
	I0408 11:35:26.824673  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:35:26.824942  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:26.824971  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:26.840692  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0408 11:35:26.841177  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:26.841729  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:26.841756  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:26.842116  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:26.842360  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:35:26.843929  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:35:26.844297  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:35:26.844324  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:35:26.859195  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0408 11:35:26.859669  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:35:26.860201  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:35:26.860232  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:35:26.860680  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:35:26.860896  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:35:26.861190  385781 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.219
	I0408 11:35:26.861206  385781 certs.go:194] generating shared ca certs ...
	I0408 11:35:26.861223  385781 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:26.861413  385781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:35:26.861462  385781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:35:26.861476  385781 certs.go:256] generating profile certs ...
	I0408 11:35:26.861593  385781 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:35:26.861627  385781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f
	I0408 11:35:26.861649  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.254]
	I0408 11:35:26.945516  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f ...
	I0408 11:35:26.945554  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f: {Name:mk2fa49de500562c209edfcdad78aac14f2fcad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:26.945764  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f ...
	I0408 11:35:26.945788  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f: {Name:mka54ad1fc6dd7a6cccca4f8741d6cd51c1a29d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:35:26.945884  385781 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.820e8a5f -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:35:26.946053  385781 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.820e8a5f -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
	I0408 11:35:26.946246  385781 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
	I0408 11:35:26.946271  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:35:26.946285  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:35:26.946295  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:35:26.946308  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:35:26.946322  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:35:26.946337  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:35:26.946354  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:35:26.946370  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:35:26.946437  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:35:26.946478  385781 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:35:26.946491  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:35:26.946520  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:35:26.946549  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:35:26.946584  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:35:26.946635  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:35:26.946674  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:35:26.946696  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:35:26.946710  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:26.946761  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:35:26.950107  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:26.950489  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:35:26.950519  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:35:26.950649  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:35:26.950865  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:35:26.951078  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:35:26.951244  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:35:27.032139  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0408 11:35:27.037846  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 11:35:27.049435  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0408 11:35:27.054099  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 11:35:27.067647  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 11:35:27.075508  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 11:35:27.090104  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0408 11:35:27.094859  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0408 11:35:27.106927  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0408 11:35:27.112469  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 11:35:27.125838  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0408 11:35:27.130420  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 11:35:27.142630  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:35:27.169237  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:35:27.195177  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:35:27.220637  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:35:27.246050  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 11:35:27.271158  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 11:35:27.297173  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:35:27.322364  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:35:27.348427  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:35:27.374612  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:35:27.401527  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:35:27.428324  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 11:35:27.446364  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 11:35:27.463903  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 11:35:27.482292  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0408 11:35:27.500782  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 11:35:27.518790  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 11:35:27.537117  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0408 11:35:27.555554  385781 ssh_runner.go:195] Run: openssl version
	I0408 11:35:27.561414  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:35:27.572343  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:35:27.577056  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:35:27.577129  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:35:27.582910  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:35:27.593852  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:35:27.605057  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:35:27.609452  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:35:27.609519  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:35:27.615618  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 11:35:27.627111  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:35:27.639164  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:27.644102  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:27.644161  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:35:27.649966  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
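
The /etc/ssl/certs link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is how the system trust store resolves CA certificates. A sketch that derives the hash by shelling out to openssl and creates the corresponding link; it assumes openssl on PATH and write access to the target directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates <certsDir>/<hash>.0 pointing at certPath, the same
// layout the "ln -fs" commands in the log above produce.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
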
	I0408 11:35:27.661463  385781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:35:27.665724  385781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:35:27.665778  385781 kubeadm.go:928] updating node {m02 192.168.39.219 8443 v1.29.3 crio true true} ...
	I0408 11:35:27.665885  385781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:35:27.665924  385781 kube-vip.go:111] generating kube-vip config ...
	I0408 11:35:27.665967  385781 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:35:27.684390  385781 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:35:27.684478  385781 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
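
The kube-vip static-pod manifest above is generated from a small set of parameters: the VIP address, API server port, NIC, and whether control-plane load-balancing is enabled. A minimal text/template sketch rendering the variable part of the env list from those inputs; this illustrates the idea only and is not minikube's kube-vip.go implementation:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that vary between clusters in the manifest above.
type vipParams struct {
	Address   string
	Port      string
	Interface string
	LBEnable  bool
}

const envTmpl = `    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.Address}}
{{- if .LBEnable}}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{.Port}}"
{{- end}}
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	// Values taken from the rendered config in the log above.
	_ = t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443", Interface: "eth0", LBEnable: true})
}
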
	I0408 11:35:27.684558  385781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:35:27.695337  385781 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0408 11:35:27.695416  385781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0408 11:35:27.705672  385781 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0408 11:35:27.705685  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0408 11:35:27.705740  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:35:27.705692  385781 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0408 11:35:27.705833  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:35:27.710620  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0408 11:35:27.710649  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0408 11:35:29.600107  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:35:29.600205  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:35:29.605583  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0408 11:35:29.605627  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0408 11:36:01.397808  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:36:01.418473  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:36:01.418592  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:36:01.424158  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0408 11:36:01.424199  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
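
Each binary above is downloaded with a checksum=file:...sha256 query, i.e. the bytes are only trusted once their SHA-256 matches the published digest, and only then copied into /var/lib/minikube/binaries. A self-contained verify-then-write sketch (the function name and placeholder digest are illustrative, not minikube's download.go API):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url, checks its SHA-256 against wantHex, and only
// then writes the bytes to dest, mirroring the checksum= behaviour in the log.
func fetchVerified(url, wantHex, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	sum := sha256.Sum256(data)
	if got := hex.EncodeToString(sum[:]); got != strings.TrimSpace(wantHex) {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
	}
	return os.WriteFile(dest, data, 0o755)
}

func main() {
	// The expected digest would itself come from the published .sha256 file.
	err := fetchVerified("https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl",
		"<expected sha256 hex>", "/tmp/kubectl")
	fmt.Println(err)
}
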
	I0408 11:36:01.935410  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 11:36:01.946923  385781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0408 11:36:01.965291  385781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:36:01.983989  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0408 11:36:02.002710  385781 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:36:02.007428  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:36:02.021446  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:36:02.160368  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:36:02.180428  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:36:02.180967  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:36:02.181029  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:36:02.196781  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I0408 11:36:02.197389  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:36:02.198141  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:36:02.198159  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:36:02.198619  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:36:02.198891  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:36:02.199131  385781 start.go:316] joinCluster: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:36:02.199260  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 11:36:02.199281  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:36:02.202792  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:36:02.203299  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:36:02.203328  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:36:02.203643  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:36:02.203852  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:36:02.204105  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:36:02.204288  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:36:02.373316  385781 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:36:02.373373  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq7sng.a232pzw4qrf0cj6i --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m02 --control-plane --apiserver-advertise-address=192.168.39.219 --apiserver-bind-port=8443"
	I0408 11:36:27.381682  385781 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qq7sng.a232pzw4qrf0cj6i --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m02 --control-plane --apiserver-advertise-address=192.168.39.219 --apiserver-bind-port=8443": (25.008277641s)
	I0408 11:36:27.381729  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 11:36:27.804605  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-438604-m02 minikube.k8s.io/updated_at=2024_04_08T11_36_27_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=ha-438604 minikube.k8s.io/primary=false
	I0408 11:36:27.944930  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-438604-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 11:36:28.059560  385781 start.go:318] duration metric: took 25.860422388s to joinCluster
	I0408 11:36:28.059655  385781 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:36:28.061481  385781 out.go:177] * Verifying Kubernetes components...
	I0408 11:36:28.060076  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:36:28.062973  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:36:28.222019  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:36:28.242681  385781 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:36:28.242963  385781 kapi.go:59] client config for ha-438604: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 11:36:28.243034  385781 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I0408 11:36:28.243247  385781 node_ready.go:35] waiting up to 6m0s for node "ha-438604-m02" to be "Ready" ...
	I0408 11:36:28.243422  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:28.243435  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:28.243445  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:28.243451  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:28.253757  385781 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0408 11:36:28.743566  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:28.743591  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:28.743600  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:28.743604  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:28.747081  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:29.244195  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:29.244221  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:29.244230  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:29.244234  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:29.248119  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:29.744435  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:29.744457  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:29.744466  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:29.744470  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:29.755676  385781 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0408 11:36:30.244065  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:30.244092  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:30.244100  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:30.244104  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:30.247540  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:30.248415  385781 node_ready.go:53] node "ha-438604-m02" has status "Ready":"False"
	I0408 11:36:30.743602  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:30.743636  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:30.743647  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:30.743654  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:30.748477  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:31.244499  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:31.244533  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:31.244544  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:31.244550  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:31.248385  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:31.744452  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:31.744512  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:31.744525  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:31.744531  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:31.748568  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:32.244258  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:32.244284  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:32.244294  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:32.244301  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:32.249131  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:32.249751  385781 node_ready.go:53] node "ha-438604-m02" has status "Ready":"False"
	I0408 11:36:32.744232  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:32.744256  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:32.744264  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:32.744268  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:32.748509  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:33.243777  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:33.243804  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:33.243815  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:33.243822  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:33.248010  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:33.743860  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:33.743891  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:33.743903  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:33.743909  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:33.754454  385781 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0408 11:36:34.243482  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:34.243525  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.243536  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.243542  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.249036  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:36:34.251293  385781 node_ready.go:53] node "ha-438604-m02" has status "Ready":"False"
	I0408 11:36:34.743650  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:34.743678  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.743703  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.743709  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.747472  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.748289  385781 node_ready.go:49] node "ha-438604-m02" has status "Ready":"True"
	I0408 11:36:34.748316  385781 node_ready.go:38] duration metric: took 6.505051931s for node "ha-438604-m02" to be "Ready" ...
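
The node_ready wait above is a bounded poll of GET /api/v1/nodes/<name> until the Ready condition reports True, capped at 6m0s. A compact client-go sketch of the same loop, assuming a kubeconfig in the default location; the function name is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node reports Ready=True or the
// timeout expires, like the node_ready wait in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-438604-m02", 6*time.Minute))
}
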
	I0408 11:36:34.748339  385781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:36:34.748424  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:34.748436  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.748447  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.748453  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.754379  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:36:34.760411  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.760504  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-7gpzq
	I0408 11:36:34.760509  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.760516  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.760523  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.764292  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.764880  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:34.764895  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.764902  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.764907  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.767984  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.768572  385781 pod_ready.go:92] pod "coredns-76f75df574-7gpzq" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:34.768595  385781 pod_ready.go:81] duration metric: took 8.155667ms for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.768605  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.768662  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-wqrvc
	I0408 11:36:34.768670  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.768677  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.768681  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.773329  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:34.773967  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:34.773984  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.773991  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.773994  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.780542  385781 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0408 11:36:34.781119  385781 pod_ready.go:92] pod "coredns-76f75df574-wqrvc" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:34.781142  385781 pod_ready.go:81] duration metric: took 12.529681ms for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.781157  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.781230  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604
	I0408 11:36:34.781241  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.781251  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.781257  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.784634  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.785244  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:34.785260  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.785267  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.785272  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.788038  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:34.788594  385781 pod_ready.go:92] pod "etcd-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:34.788613  385781 pod_ready.go:81] duration metric: took 7.449373ms for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.788623  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:34.788676  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:34.788684  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.788690  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.788695  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.791720  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:34.792508  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:34.792536  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:34.792544  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:34.792548  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:34.794924  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:35.288893  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:35.288933  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.288945  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.288951  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.293036  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:35.294052  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:35.294068  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.294076  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.294079  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.297225  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:35.789111  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:35.789138  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.789145  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.789150  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.792783  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:35.793601  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:35.793616  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:35.793624  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:35.793629  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:35.796417  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:36.289582  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:36.289611  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.289626  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.289633  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.293285  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:36.293901  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:36.293918  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.293926  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.293929  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.296833  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:36.788843  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:36.788874  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.788882  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.788886  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.793133  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:36.794171  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:36.794186  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:36.794194  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:36.794197  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:36.797235  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:36.797863  385781 pod_ready.go:102] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"False"
	I0408 11:36:37.289391  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:37.289419  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.289430  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.289434  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.293155  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:37.293980  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:37.293999  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.294007  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.294011  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.296987  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:37.789029  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:37.789059  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.789067  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.789070  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.793365  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:37.794092  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:37.794108  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:37.794116  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:37.794119  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:37.797369  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.289260  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:38.289285  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.289293  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.289296  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.292902  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.293678  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:38.293693  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.293701  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.293704  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.296355  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:38.789345  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:38.789373  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.789385  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.789393  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.793214  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.794044  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:38.794060  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:38.794068  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:38.794072  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:38.797384  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:38.798205  385781 pod_ready.go:102] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"False"
	I0408 11:36:39.289125  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:39.289146  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.289155  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.289158  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.293122  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:39.293795  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:39.293812  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.293820  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.293823  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.296538  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:39.789721  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:39.789751  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.789760  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.789764  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.793599  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:39.794544  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:39.794563  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:39.794572  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:39.794578  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:39.797939  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:40.289517  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:40.289545  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.289554  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.289559  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.293709  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:40.294341  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:40.294360  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.294367  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.294371  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.297903  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:40.788867  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:40.788895  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.788904  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.788909  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.792873  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:40.793477  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:40.793519  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:40.793534  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:40.793540  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:40.796818  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.289529  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:41.289557  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.289565  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.289570  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.294522  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:41.295448  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.295465  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.295473  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.295478  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.299189  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.299774  385781 pod_ready.go:102] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"False"
	I0408 11:36:41.789182  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:36:41.789215  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.789227  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.789234  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.793831  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:41.794616  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.794637  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.794653  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.794660  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.798274  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.798979  385781 pod_ready.go:92] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.799008  385781 pod_ready.go:81] duration metric: took 7.0103782s for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.799031  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.799113  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604
	I0408 11:36:41.799125  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.799136  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.799142  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.802293  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.803080  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:41.803098  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.803106  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.803110  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.806600  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.807195  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.807218  385781 pod_ready.go:81] duration metric: took 8.178645ms for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.807229  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.807297  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m02
	I0408 11:36:41.807308  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.807317  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.807331  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.810383  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.811020  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.811034  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.811041  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.811046  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.813960  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.814540  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.814558  385781 pod_ready.go:81] duration metric: took 7.322437ms for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.814568  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.814624  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604
	I0408 11:36:41.814631  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.814638  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.814642  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.817199  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.818052  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:41.818067  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.818073  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.818076  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.820761  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.821564  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.821584  385781 pod_ready.go:81] duration metric: took 7.008859ms for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.821594  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.821643  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m02
	I0408 11:36:41.821651  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.821658  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.821663  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.824384  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:36:41.943994  385781 request.go:629] Waited for 118.909495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.944065  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:41.944070  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:41.944077  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:41.944080  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:41.947809  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:41.948434  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:41.948461  385781 pod_ready.go:81] duration metric: took 126.859334ms for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:41.948481  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.143717  385781 request.go:629] Waited for 195.137496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5vc66
	I0408 11:36:42.143794  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5vc66
	I0408 11:36:42.143799  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.143806  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.143810  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.147303  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:42.343754  385781 request.go:629] Waited for 195.589457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:42.343864  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:42.343869  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.343877  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.343880  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.347551  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:42.348130  385781 pod_ready.go:92] pod "kube-proxy-5vc66" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:42.348153  385781 pod_ready.go:81] duration metric: took 399.662514ms for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.348166  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.544200  385781 request.go:629] Waited for 195.950833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:36:42.544286  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:36:42.544292  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.544302  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.544309  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.548402  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:42.744504  385781 request.go:629] Waited for 195.398875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:42.744603  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:42.744613  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.744622  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.744627  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.748502  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:42.749324  385781 pod_ready.go:92] pod "kube-proxy-v98zm" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:42.749352  385781 pod_ready.go:81] duration metric: took 401.175152ms for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.749365  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:42.944443  385781 request.go:629] Waited for 194.973445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:36:42.944547  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:36:42.944561  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:42.944571  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:42.944578  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:42.948915  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:43.143974  385781 request.go:629] Waited for 194.38792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:43.144056  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:36:43.144063  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.144072  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.144078  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.147512  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:43.148209  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:43.148235  385781 pod_ready.go:81] duration metric: took 398.861276ms for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:43.148250  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:43.344310  385781 request.go:629] Waited for 195.952368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:36:43.344377  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:36:43.344384  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.344391  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.344396  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.348251  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:43.544502  385781 request.go:629] Waited for 195.28393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:43.544570  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:36:43.544574  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.544583  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.544588  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.548549  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:36:43.549219  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:36:43.549237  385781 pod_ready.go:81] duration metric: took 400.978745ms for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:36:43.549253  385781 pod_ready.go:38] duration metric: took 8.800894972s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:36:43.549279  385781 api_server.go:52] waiting for apiserver process to appear ...
	I0408 11:36:43.549343  385781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:36:43.567277  385781 api_server.go:72] duration metric: took 15.507562921s to wait for apiserver process to appear ...
	I0408 11:36:43.567306  385781 api_server.go:88] waiting for apiserver healthz status ...
	I0408 11:36:43.567328  385781 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I0408 11:36:43.572315  385781 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I0408 11:36:43.572420  385781 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I0408 11:36:43.572432  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.572440  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.572445  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.573606  385781 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0408 11:36:43.573737  385781 api_server.go:141] control plane version: v1.29.3
	I0408 11:36:43.573764  385781 api_server.go:131] duration metric: took 6.450273ms to wait for apiserver health ...
	I0408 11:36:43.573776  385781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 11:36:43.744235  385781 request.go:629] Waited for 170.361884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:43.744324  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:43.744332  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.744342  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.744349  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.752886  385781 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0408 11:36:43.758544  385781 system_pods.go:59] 17 kube-system pods found
	I0408 11:36:43.758587  385781 system_pods.go:61] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:36:43.758594  385781 system_pods.go:61] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:36:43.758599  385781 system_pods.go:61] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:36:43.758604  385781 system_pods.go:61] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:36:43.758609  385781 system_pods.go:61] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:36:43.758613  385781 system_pods.go:61] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:36:43.758617  385781 system_pods.go:61] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:36:43.758622  385781 system_pods.go:61] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:36:43.758630  385781 system_pods.go:61] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:36:43.758636  385781 system_pods.go:61] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:36:43.758641  385781 system_pods.go:61] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:36:43.758646  385781 system_pods.go:61] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:36:43.758651  385781 system_pods.go:61] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:36:43.758658  385781 system_pods.go:61] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:36:43.758666  385781 system_pods.go:61] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:36:43.758671  385781 system_pods.go:61] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:36:43.758677  385781 system_pods.go:61] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:36:43.758686  385781 system_pods.go:74] duration metric: took 184.900644ms to wait for pod list to return data ...
	I0408 11:36:43.758704  385781 default_sa.go:34] waiting for default service account to be created ...
	I0408 11:36:43.944147  385781 request.go:629] Waited for 185.347535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:36:43.944239  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:36:43.944244  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:43.944251  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:43.944263  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:43.948890  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:43.949107  385781 default_sa.go:45] found service account: "default"
	I0408 11:36:43.949123  385781 default_sa.go:55] duration metric: took 190.411578ms for default service account to be created ...
	I0408 11:36:43.949133  385781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 11:36:44.144358  385781 request.go:629] Waited for 195.129265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:44.144427  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:36:44.144432  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:44.144440  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:44.144445  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:44.150184  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:36:44.154262  385781 system_pods.go:86] 17 kube-system pods found
	I0408 11:36:44.154290  385781 system_pods.go:89] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:36:44.154296  385781 system_pods.go:89] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:36:44.154300  385781 system_pods.go:89] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:36:44.154304  385781 system_pods.go:89] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:36:44.154307  385781 system_pods.go:89] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:36:44.154311  385781 system_pods.go:89] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:36:44.154315  385781 system_pods.go:89] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:36:44.154319  385781 system_pods.go:89] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:36:44.154323  385781 system_pods.go:89] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:36:44.154327  385781 system_pods.go:89] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:36:44.154331  385781 system_pods.go:89] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:36:44.154334  385781 system_pods.go:89] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:36:44.154338  385781 system_pods.go:89] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:36:44.154342  385781 system_pods.go:89] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:36:44.154346  385781 system_pods.go:89] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:36:44.154350  385781 system_pods.go:89] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:36:44.154353  385781 system_pods.go:89] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:36:44.154359  385781 system_pods.go:126] duration metric: took 205.221822ms to wait for k8s-apps to be running ...
	I0408 11:36:44.154379  385781 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 11:36:44.154425  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:36:44.173282  385781 system_svc.go:56] duration metric: took 18.891908ms WaitForService to wait for kubelet
	I0408 11:36:44.173312  385781 kubeadm.go:576] duration metric: took 16.113606667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:36:44.173332  385781 node_conditions.go:102] verifying NodePressure condition ...
	I0408 11:36:44.343651  385781 request.go:629] Waited for 170.234097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I0408 11:36:44.343767  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I0408 11:36:44.343772  385781 round_trippers.go:469] Request Headers:
	I0408 11:36:44.343780  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:36:44.343785  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:36:44.347851  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:36:44.348634  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:36:44.348683  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:36:44.348696  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:36:44.348699  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:36:44.348704  385781 node_conditions.go:105] duration metric: took 175.367276ms to run NodePressure ...
	I0408 11:36:44.348719  385781 start.go:240] waiting for startup goroutines ...
	I0408 11:36:44.348749  385781 start.go:254] writing updated cluster config ...
	I0408 11:36:44.350948  385781 out.go:177] 
	I0408 11:36:44.352496  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:36:44.352594  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:36:44.354576  385781 out.go:177] * Starting "ha-438604-m03" control-plane node in "ha-438604" cluster
	I0408 11:36:44.355714  385781 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:36:44.355745  385781 cache.go:56] Caching tarball of preloaded images
	I0408 11:36:44.355855  385781 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:36:44.355869  385781 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:36:44.355963  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:36:44.356132  385781 start.go:360] acquireMachinesLock for ha-438604-m03: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:36:44.356174  385781 start.go:364] duration metric: took 22.618µs to acquireMachinesLock for "ha-438604-m03"
	I0408 11:36:44.356191  385781 start.go:93] Provisioning new machine with config: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:36:44.356279  385781 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0408 11:36:44.357958  385781 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 11:36:44.358060  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:36:44.358096  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:36:44.373560  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43933
	I0408 11:36:44.374113  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:36:44.374622  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:36:44.374645  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:36:44.375022  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:36:44.375234  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:36:44.375398  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:36:44.375601  385781 start.go:159] libmachine.API.Create for "ha-438604" (driver="kvm2")
	I0408 11:36:44.375640  385781 client.go:168] LocalClient.Create starting
	I0408 11:36:44.375700  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 11:36:44.375747  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:36:44.375770  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:36:44.375843  385781 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 11:36:44.375868  385781 main.go:141] libmachine: Decoding PEM data...
	I0408 11:36:44.375882  385781 main.go:141] libmachine: Parsing certificate...
	I0408 11:36:44.375911  385781 main.go:141] libmachine: Running pre-create checks...
	I0408 11:36:44.375923  385781 main.go:141] libmachine: (ha-438604-m03) Calling .PreCreateCheck
	I0408 11:36:44.376135  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetConfigRaw
	I0408 11:36:44.376529  385781 main.go:141] libmachine: Creating machine...
	I0408 11:36:44.376544  385781 main.go:141] libmachine: (ha-438604-m03) Calling .Create
	I0408 11:36:44.376708  385781 main.go:141] libmachine: (ha-438604-m03) Creating KVM machine...
	I0408 11:36:44.378138  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found existing default KVM network
	I0408 11:36:44.378335  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found existing private KVM network mk-ha-438604
	I0408 11:36:44.378520  385781 main.go:141] libmachine: (ha-438604-m03) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03 ...
	I0408 11:36:44.378552  385781 main.go:141] libmachine: (ha-438604-m03) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:36:44.378612  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.378446  386749 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:36:44.378698  385781 main.go:141] libmachine: (ha-438604-m03) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 11:36:44.643553  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.643422  386749 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa...
	I0408 11:36:44.816990  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.816859  386749 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/ha-438604-m03.rawdisk...
	I0408 11:36:44.817029  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Writing magic tar header
	I0408 11:36:44.817040  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Writing SSH key tar header
	I0408 11:36:44.817048  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:44.817022  386749 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03 ...
	I0408 11:36:44.817215  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03
	I0408 11:36:44.817252  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 11:36:44.817270  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03 (perms=drwx------)
	I0408 11:36:44.817283  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:36:44.817307  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 11:36:44.817321  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 11:36:44.817331  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 11:36:44.817344  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 11:36:44.817356  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home/jenkins
	I0408 11:36:44.817367  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Checking permissions on dir: /home
	I0408 11:36:44.817379  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Skipping /home - not owner
	I0408 11:36:44.817412  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 11:36:44.817434  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 11:36:44.817451  385781 main.go:141] libmachine: (ha-438604-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 11:36:44.817464  385781 main.go:141] libmachine: (ha-438604-m03) Creating domain...
	I0408 11:36:44.818263  385781 main.go:141] libmachine: (ha-438604-m03) define libvirt domain using xml: 
	I0408 11:36:44.818279  385781 main.go:141] libmachine: (ha-438604-m03) <domain type='kvm'>
	I0408 11:36:44.818289  385781 main.go:141] libmachine: (ha-438604-m03)   <name>ha-438604-m03</name>
	I0408 11:36:44.818296  385781 main.go:141] libmachine: (ha-438604-m03)   <memory unit='MiB'>2200</memory>
	I0408 11:36:44.818305  385781 main.go:141] libmachine: (ha-438604-m03)   <vcpu>2</vcpu>
	I0408 11:36:44.818311  385781 main.go:141] libmachine: (ha-438604-m03)   <features>
	I0408 11:36:44.818318  385781 main.go:141] libmachine: (ha-438604-m03)     <acpi/>
	I0408 11:36:44.818326  385781 main.go:141] libmachine: (ha-438604-m03)     <apic/>
	I0408 11:36:44.818332  385781 main.go:141] libmachine: (ha-438604-m03)     <pae/>
	I0408 11:36:44.818346  385781 main.go:141] libmachine: (ha-438604-m03)     
	I0408 11:36:44.818353  385781 main.go:141] libmachine: (ha-438604-m03)   </features>
	I0408 11:36:44.818360  385781 main.go:141] libmachine: (ha-438604-m03)   <cpu mode='host-passthrough'>
	I0408 11:36:44.818392  385781 main.go:141] libmachine: (ha-438604-m03)   
	I0408 11:36:44.818420  385781 main.go:141] libmachine: (ha-438604-m03)   </cpu>
	I0408 11:36:44.818436  385781 main.go:141] libmachine: (ha-438604-m03)   <os>
	I0408 11:36:44.818449  385781 main.go:141] libmachine: (ha-438604-m03)     <type>hvm</type>
	I0408 11:36:44.818463  385781 main.go:141] libmachine: (ha-438604-m03)     <boot dev='cdrom'/>
	I0408 11:36:44.818474  385781 main.go:141] libmachine: (ha-438604-m03)     <boot dev='hd'/>
	I0408 11:36:44.818491  385781 main.go:141] libmachine: (ha-438604-m03)     <bootmenu enable='no'/>
	I0408 11:36:44.818502  385781 main.go:141] libmachine: (ha-438604-m03)   </os>
	I0408 11:36:44.818512  385781 main.go:141] libmachine: (ha-438604-m03)   <devices>
	I0408 11:36:44.818524  385781 main.go:141] libmachine: (ha-438604-m03)     <disk type='file' device='cdrom'>
	I0408 11:36:44.818552  385781 main.go:141] libmachine: (ha-438604-m03)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/boot2docker.iso'/>
	I0408 11:36:44.818570  385781 main.go:141] libmachine: (ha-438604-m03)       <target dev='hdc' bus='scsi'/>
	I0408 11:36:44.818578  385781 main.go:141] libmachine: (ha-438604-m03)       <readonly/>
	I0408 11:36:44.818586  385781 main.go:141] libmachine: (ha-438604-m03)     </disk>
	I0408 11:36:44.818603  385781 main.go:141] libmachine: (ha-438604-m03)     <disk type='file' device='disk'>
	I0408 11:36:44.818612  385781 main.go:141] libmachine: (ha-438604-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 11:36:44.818620  385781 main.go:141] libmachine: (ha-438604-m03)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/ha-438604-m03.rawdisk'/>
	I0408 11:36:44.818628  385781 main.go:141] libmachine: (ha-438604-m03)       <target dev='hda' bus='virtio'/>
	I0408 11:36:44.818633  385781 main.go:141] libmachine: (ha-438604-m03)     </disk>
	I0408 11:36:44.818641  385781 main.go:141] libmachine: (ha-438604-m03)     <interface type='network'>
	I0408 11:36:44.818647  385781 main.go:141] libmachine: (ha-438604-m03)       <source network='mk-ha-438604'/>
	I0408 11:36:44.818652  385781 main.go:141] libmachine: (ha-438604-m03)       <model type='virtio'/>
	I0408 11:36:44.818659  385781 main.go:141] libmachine: (ha-438604-m03)     </interface>
	I0408 11:36:44.818664  385781 main.go:141] libmachine: (ha-438604-m03)     <interface type='network'>
	I0408 11:36:44.818669  385781 main.go:141] libmachine: (ha-438604-m03)       <source network='default'/>
	I0408 11:36:44.818674  385781 main.go:141] libmachine: (ha-438604-m03)       <model type='virtio'/>
	I0408 11:36:44.818680  385781 main.go:141] libmachine: (ha-438604-m03)     </interface>
	I0408 11:36:44.818690  385781 main.go:141] libmachine: (ha-438604-m03)     <serial type='pty'>
	I0408 11:36:44.818710  385781 main.go:141] libmachine: (ha-438604-m03)       <target port='0'/>
	I0408 11:36:44.818726  385781 main.go:141] libmachine: (ha-438604-m03)     </serial>
	I0408 11:36:44.818739  385781 main.go:141] libmachine: (ha-438604-m03)     <console type='pty'>
	I0408 11:36:44.818763  385781 main.go:141] libmachine: (ha-438604-m03)       <target type='serial' port='0'/>
	I0408 11:36:44.818786  385781 main.go:141] libmachine: (ha-438604-m03)     </console>
	I0408 11:36:44.818800  385781 main.go:141] libmachine: (ha-438604-m03)     <rng model='virtio'>
	I0408 11:36:44.818815  385781 main.go:141] libmachine: (ha-438604-m03)       <backend model='random'>/dev/random</backend>
	I0408 11:36:44.818825  385781 main.go:141] libmachine: (ha-438604-m03)     </rng>
	I0408 11:36:44.818838  385781 main.go:141] libmachine: (ha-438604-m03)     
	I0408 11:36:44.818850  385781 main.go:141] libmachine: (ha-438604-m03)     
	I0408 11:36:44.818863  385781 main.go:141] libmachine: (ha-438604-m03)   </devices>
	I0408 11:36:44.818871  385781 main.go:141] libmachine: (ha-438604-m03) </domain>
	I0408 11:36:44.818886  385781 main.go:141] libmachine: (ha-438604-m03) 
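	[editor's note] The lines above are the <domain type='kvm'> XML that minikube's KVM driver hands to libvirt before it logs "Creating domain...". As a rough, illustrative sketch (not minikube's actual code), defining and starting such a domain with the libvirt Go bindings looks roughly like this; the connection URI, module path and "domain.xml" file are placeholders:

	    package main

	    import (
	        "log"
	        "os"

	        "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
	    )

	    func main() {
	        // Connect to the system libvirtd, as the KVM driver does.
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            log.Fatalf("connect to libvirt: %v", err)
	        }
	        defer conn.Close()

	        // domain.xml would hold the <domain type='kvm'> definition printed in the log above.
	        xml, err := os.ReadFile("domain.xml")
	        if err != nil {
	            log.Fatalf("read domain xml: %v", err)
	        }

	        // Define the persistent domain, then start it (the "Creating domain..." step).
	        dom, err := conn.DomainDefineXML(string(xml))
	        if err != nil {
	            log.Fatalf("define domain: %v", err)
	        }
	        defer dom.Free()

	        if err := dom.Create(); err != nil {
	            log.Fatalf("start domain: %v", err)
	        }
	        log.Println("domain defined and started")
	    }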
	I0408 11:36:44.826308  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:b7:e6:7b in network default
	I0408 11:36:44.826831  385781 main.go:141] libmachine: (ha-438604-m03) Ensuring networks are active...
	I0408 11:36:44.826857  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:44.827673  385781 main.go:141] libmachine: (ha-438604-m03) Ensuring network default is active
	I0408 11:36:44.827996  385781 main.go:141] libmachine: (ha-438604-m03) Ensuring network mk-ha-438604 is active
	I0408 11:36:44.828425  385781 main.go:141] libmachine: (ha-438604-m03) Getting domain xml...
	I0408 11:36:44.829240  385781 main.go:141] libmachine: (ha-438604-m03) Creating domain...
	I0408 11:36:46.057797  385781 main.go:141] libmachine: (ha-438604-m03) Waiting to get IP...
	I0408 11:36:46.058891  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.059419  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.059470  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.059409  386749 retry.go:31] will retry after 229.460449ms: waiting for machine to come up
	I0408 11:36:46.290968  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.291521  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.291552  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.291461  386749 retry.go:31] will retry after 307.284768ms: waiting for machine to come up
	I0408 11:36:46.601546  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.602083  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.602120  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.602020  386749 retry.go:31] will retry after 327.627325ms: waiting for machine to come up
	I0408 11:36:46.931454  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:46.932038  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:46.932071  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:46.931977  386749 retry.go:31] will retry after 561.835462ms: waiting for machine to come up
	I0408 11:36:47.495895  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:47.496380  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:47.496411  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:47.496323  386749 retry.go:31] will retry after 576.910228ms: waiting for machine to come up
	I0408 11:36:48.075195  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:48.075642  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:48.075669  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:48.075597  386749 retry.go:31] will retry after 903.152639ms: waiting for machine to come up
	I0408 11:36:48.980395  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:48.980909  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:48.980940  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:48.980858  386749 retry.go:31] will retry after 729.415904ms: waiting for machine to come up
	I0408 11:36:49.712423  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:49.712861  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:49.712894  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:49.712804  386749 retry.go:31] will retry after 1.330546456s: waiting for machine to come up
	I0408 11:36:51.044838  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:51.045340  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:51.045365  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:51.045301  386749 retry.go:31] will retry after 1.572213961s: waiting for machine to come up
	I0408 11:36:52.620114  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:52.620704  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:52.620738  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:52.620664  386749 retry.go:31] will retry after 1.486096453s: waiting for machine to come up
	I0408 11:36:54.109491  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:54.110034  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:54.110066  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:54.109972  386749 retry.go:31] will retry after 2.645739084s: waiting for machine to come up
	I0408 11:36:56.757778  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:36:56.758368  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:36:56.758401  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:36:56.758295  386749 retry.go:31] will retry after 3.332565363s: waiting for machine to come up
	I0408 11:37:00.092561  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:00.093016  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:37:00.093049  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:37:00.092944  386749 retry.go:31] will retry after 3.296166589s: waiting for machine to come up
	I0408 11:37:03.393531  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:03.393975  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find current IP address of domain ha-438604-m03 in network mk-ha-438604
	I0408 11:37:03.394000  385781 main.go:141] libmachine: (ha-438604-m03) DBG | I0408 11:37:03.393924  386749 retry.go:31] will retry after 4.35483244s: waiting for machine to come up
	I0408 11:37:07.750339  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.750804  385781 main.go:141] libmachine: (ha-438604-m03) Found IP for machine: 192.168.39.94
	I0408 11:37:07.750840  385781 main.go:141] libmachine: (ha-438604-m03) Reserving static IP address...
	I0408 11:37:07.750853  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has current primary IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.751356  385781 main.go:141] libmachine: (ha-438604-m03) DBG | unable to find host DHCP lease matching {name: "ha-438604-m03", mac: "52:54:00:fa:7c:74", ip: "192.168.39.94"} in network mk-ha-438604
	I0408 11:37:07.840885  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Getting to WaitForSSH function...
	I0408 11:37:07.840923  385781 main.go:141] libmachine: (ha-438604-m03) Reserved static IP address: 192.168.39.94
	I0408 11:37:07.840938  385781 main.go:141] libmachine: (ha-438604-m03) Waiting for SSH to be available...
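	[editor's note] The run of "unable to find current IP address ... will retry after ..." lines above comes from a polling loop: the driver repeatedly asks libvirt's DHCP leases for the new domain's address, sleeping a growing, jittered interval between attempts. A minimal stand-alone sketch of that pattern, with lookupIP as a hypothetical stand-in for the lease query:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // lookupIP is a placeholder for the DHCP-lease lookup; here it "finds" an
	    // address only after a few attempts, to exercise the retry path.
	    func lookupIP(attempt int) (string, error) {
	        if attempt < 5 {
	            return "", errors.New("unable to find current IP address")
	        }
	        return "192.168.39.94", nil
	    }

	    func main() {
	        wait := 200 * time.Millisecond
	        for attempt := 1; ; attempt++ {
	            ip, err := lookupIP(attempt)
	            if err == nil {
	                fmt.Println("found IP:", ip)
	                return
	            }
	            // Jittered, growing delay, similar in spirit to the "will retry after ..." lines.
	            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
	            fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, sleep)
	            time.Sleep(sleep)
	            wait = wait * 3 / 2
	        }
	    }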
	I0408 11:37:07.844040  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.844579  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:07.844614  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.844821  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Using SSH client type: external
	I0408 11:37:07.844854  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa (-rw-------)
	I0408 11:37:07.844890  385781 main.go:141] libmachine: (ha-438604-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 11:37:07.844910  385781 main.go:141] libmachine: (ha-438604-m03) DBG | About to run SSH command:
	I0408 11:37:07.844931  385781 main.go:141] libmachine: (ha-438604-m03) DBG | exit 0
	I0408 11:37:07.975976  385781 main.go:141] libmachine: (ha-438604-m03) DBG | SSH cmd err, output: <nil>: 
	I0408 11:37:07.976259  385781 main.go:141] libmachine: (ha-438604-m03) KVM machine creation complete!
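	[editor's note] "Waiting for SSH to be available" above is done by running a trivial remote command (exit 0) through the external ssh client with the flags shown in the DBG line, until it succeeds. A simplified sketch of that probe, with the key path as a placeholder:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // probeSSH runs "exit 0" on the guest; the flags mirror the external-SSH
	    // invocation in the log and are illustrative, not exhaustive.
	    func probeSSH(ip, keyPath string) error {
	        args := []string{
	            "-F", "/dev/null",
	            "-o", "ConnectionAttempts=3",
	            "-o", "ConnectTimeout=10",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "IdentitiesOnly=yes",
	            "-i", keyPath,
	            "-p", "22",
	            "docker@" + ip,
	            "exit 0",
	        }
	        return exec.Command("/usr/bin/ssh", args...).Run()
	    }

	    func main() {
	        for {
	            if err := probeSSH("192.168.39.94", "/path/to/id_rsa"); err == nil {
	                fmt.Println("SSH is available")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }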
	I0408 11:37:07.976640  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetConfigRaw
	I0408 11:37:07.977212  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:07.977449  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:07.977639  385781 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 11:37:07.977652  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:37:07.978945  385781 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 11:37:07.978972  385781 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 11:37:07.978993  385781 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 11:37:07.979004  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:07.981555  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.981934  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:07.981964  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:07.982168  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:07.982360  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:07.982580  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:07.982737  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:07.982952  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:07.983277  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:07.983293  385781 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 11:37:08.095435  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:37:08.095458  385781 main.go:141] libmachine: Detecting the provisioner...
	I0408 11:37:08.095466  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.098194  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.098548  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.098581  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.098727  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.098972  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.099174  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.099345  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.099506  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.099720  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.099733  385781 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 11:37:08.217134  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 11:37:08.217235  385781 main.go:141] libmachine: found compatible host: buildroot
	I0408 11:37:08.217254  385781 main.go:141] libmachine: Provisioning with buildroot...
	I0408 11:37:08.217269  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:37:08.217685  385781 buildroot.go:166] provisioning hostname "ha-438604-m03"
	I0408 11:37:08.217714  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:37:08.217960  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.220587  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.221036  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.221062  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.221207  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.221485  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.221693  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.221878  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.222065  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.222294  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.222311  385781 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604-m03 && echo "ha-438604-m03" | sudo tee /etc/hostname
	I0408 11:37:08.352555  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604-m03
	
	I0408 11:37:08.352592  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.355632  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.356068  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.356093  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.356293  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.356525  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.356690  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.356874  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.357051  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.357266  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.357290  385781 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:37:08.479375  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
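	[editor's note] Provisioning the hostname above is two SSH commands: set /etc/hostname, then make sure the name resolves locally via /etc/hosts. A self-contained sketch of the same idea using golang.org/x/crypto/ssh (key path is a placeholder, and the /etc/hosts command is simplified relative to the sed logic in the log):

	    package main

	    import (
	        "fmt"
	        "log"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        const host = "192.168.39.94"
	        const name = "ha-438604-m03"

	        key, err := os.ReadFile("/path/to/id_rsa") // placeholder path
	        if err != nil {
	            log.Fatal(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            log.Fatal(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	        }
	        client, err := ssh.Dial("tcp", host+":22", cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer client.Close()

	        // Same two steps the provisioner runs: set the hostname, then pin it in /etc/hosts.
	        cmds := []string{
	            fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
	            fmt.Sprintf(`grep -q '%[1]s' /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts`, name),
	        }
	        for _, cmd := range cmds {
	            sess, err := client.NewSession()
	            if err != nil {
	                log.Fatal(err)
	            }
	            out, err := sess.CombinedOutput(cmd)
	            sess.Close()
	            if err != nil {
	                log.Fatalf("%s: %v\n%s", cmd, err, out)
	            }
	        }
	    }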
	I0408 11:37:08.479424  385781 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:37:08.479446  385781 buildroot.go:174] setting up certificates
	I0408 11:37:08.479458  385781 provision.go:84] configureAuth start
	I0408 11:37:08.479472  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetMachineName
	I0408 11:37:08.479799  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:08.482989  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.483383  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.483422  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.483585  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.485698  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.486004  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.486034  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.486202  385781 provision.go:143] copyHostCerts
	I0408 11:37:08.486239  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:37:08.486272  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:37:08.486281  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:37:08.486366  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:37:08.486441  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:37:08.486458  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:37:08.486465  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:37:08.486486  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:37:08.486531  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:37:08.486554  385781 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:37:08.486562  385781 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:37:08.486586  385781 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:37:08.486643  385781 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604-m03 san=[127.0.0.1 192.168.39.94 ha-438604-m03 localhost minikube]
	I0408 11:37:08.592303  385781 provision.go:177] copyRemoteCerts
	I0408 11:37:08.592372  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:37:08.592406  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.595262  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.595748  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.595786  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.595992  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.596254  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.596430  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.596621  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:08.687708  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:37:08.687789  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:37:08.715553  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:37:08.715634  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 11:37:08.745648  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:37:08.745722  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 11:37:08.773099  385781 provision.go:87] duration metric: took 293.624604ms to configureAuth
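	[editor's note] copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the new node. A simplified stand-in for that step, streaming each file through "sudo tee" over ssh rather than using minikube's internal ssh_runner (IP, key path and local file names are placeholders):

	    package main

	    import (
	        "bytes"
	        "log"
	        "os"
	        "os/exec"
	    )

	    // pushFile copies a local PEM file to the guest by piping it through sudo tee,
	    // so the destination can live under a root-owned directory such as /etc/docker.
	    func pushFile(ip, key, local, remote string) error {
	        data, err := os.ReadFile(local)
	        if err != nil {
	            return err
	        }
	        cmd := exec.Command("ssh",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-i", key,
	            "docker@"+ip,
	            "sudo mkdir -p /etc/docker && sudo tee "+remote+" > /dev/null")
	        cmd.Stdin = bytes.NewReader(data)
	        return cmd.Run()
	    }

	    func main() {
	        ip, key := "192.168.39.94", "/path/to/id_rsa" // placeholders
	        for local, remote := range map[string]string{
	            "ca.pem":         "/etc/docker/ca.pem",
	            "server.pem":     "/etc/docker/server.pem",
	            "server-key.pem": "/etc/docker/server-key.pem",
	        } {
	            if err := pushFile(ip, key, local, remote); err != nil {
	                log.Fatalf("copy %s: %v", local, err)
	            }
	        }
	    }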
	I0408 11:37:08.773142  385781 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:37:08.773371  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:37:08.773452  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:08.776051  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.776430  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:08.776461  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:08.776720  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:08.776956  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.777103  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:08.777234  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:08.777466  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:08.777676  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:08.777700  385781 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:37:09.056944  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:37:09.056989  385781 main.go:141] libmachine: Checking connection to Docker...
	I0408 11:37:09.057025  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetURL
	I0408 11:37:09.058445  385781 main.go:141] libmachine: (ha-438604-m03) DBG | Using libvirt version 6000000
	I0408 11:37:09.060835  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.061248  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.061293  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.061482  385781 main.go:141] libmachine: Docker is up and running!
	I0408 11:37:09.061504  385781 main.go:141] libmachine: Reticulating splines...
	I0408 11:37:09.061518  385781 client.go:171] duration metric: took 24.685861155s to LocalClient.Create
	I0408 11:37:09.061547  385781 start.go:167] duration metric: took 24.685946543s to libmachine.API.Create "ha-438604"
	I0408 11:37:09.061560  385781 start.go:293] postStartSetup for "ha-438604-m03" (driver="kvm2")
	I0408 11:37:09.061575  385781 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:37:09.061604  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.061872  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:37:09.061902  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:09.064565  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.064951  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.064985  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.065226  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.065442  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.065628  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.065802  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:09.156100  385781 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:37:09.160949  385781 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:37:09.160986  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:37:09.161064  385781 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:37:09.161145  385781 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:37:09.161157  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:37:09.161256  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:37:09.172986  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:37:09.203536  385781 start.go:296] duration metric: took 141.959614ms for postStartSetup
	I0408 11:37:09.203612  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetConfigRaw
	I0408 11:37:09.204351  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:09.207273  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.207708  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.207749  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.208104  385781 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:37:09.208335  385781 start.go:128] duration metric: took 24.852044083s to createHost
	I0408 11:37:09.208365  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:09.211104  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.211536  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.211568  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.211781  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.211985  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.212132  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.212303  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.212530  385781 main.go:141] libmachine: Using SSH client type: native
	I0408 11:37:09.212700  385781 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0408 11:37:09.212710  385781 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:37:09.325630  385781 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576229.289336082
	
	I0408 11:37:09.325653  385781 fix.go:216] guest clock: 1712576229.289336082
	I0408 11:37:09.325661  385781 fix.go:229] Guest: 2024-04-08 11:37:09.289336082 +0000 UTC Remote: 2024-04-08 11:37:09.208348473 +0000 UTC m=+187.191397319 (delta=80.987609ms)
	I0408 11:37:09.325677  385781 fix.go:200] guest clock delta is within tolerance: 80.987609ms
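	[editor's note] The fix.go lines above compare the guest's clock (read over SSH with the date command) against the host's timestamp and accept the machine if the difference stays inside a tolerance; here the measured delta is about 81ms. A tiny sketch of that check; the 2-second tolerance below is an assumed value, not one taken from the log:

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // withinTolerance reports the absolute clock skew between guest and host
	    // and whether it stays under the allowed maximum.
	    func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
	        delta := guest.Sub(host)
	        if delta < 0 {
	            delta = -delta
	        }
	        return delta, delta <= max
	    }

	    func main() {
	        guest := time.Unix(1712576229, 289336082) // 1712576229.289336082 from the log
	        host := guest.Add(-81 * time.Millisecond) // host-side timestamp, ~81ms earlier
	        delta, ok := withinTolerance(guest, host, 2*time.Second) // assumed tolerance
	        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	    }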
	I0408 11:37:09.325684  385781 start.go:83] releasing machines lock for "ha-438604-m03", held for 24.969499516s
	I0408 11:37:09.325707  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.325974  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:09.328879  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.329376  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.329411  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.331924  385781 out.go:177] * Found network options:
	I0408 11:37:09.333553  385781 out.go:177]   - NO_PROXY=192.168.39.99,192.168.39.219
	W0408 11:37:09.334989  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 11:37:09.335009  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:37:09.335028  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.335728  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.335996  385781 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:37:09.336117  385781 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:37:09.336160  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	W0408 11:37:09.336241  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 11:37:09.336271  385781 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 11:37:09.336347  385781 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:37:09.336372  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:37:09.339064  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339094  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339500  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.339545  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339576  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:09.339600  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:09.339741  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.339824  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:37:09.339915  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.340004  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:37:09.340020  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.340175  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:37:09.340183  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:09.340336  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:37:09.586150  385781 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:37:09.595159  385781 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:37:09.595247  385781 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:37:09.616430  385781 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 11:37:09.616466  385781 start.go:494] detecting cgroup driver to use...
	I0408 11:37:09.616543  385781 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:37:09.637204  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:37:09.654536  385781 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:37:09.654619  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:37:09.672473  385781 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:37:09.687985  385781 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:37:09.815363  385781 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:37:09.954588  385781 docker.go:233] disabling docker service ...
	I0408 11:37:09.954680  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:37:09.972200  385781 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:37:09.987847  385781 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:37:10.136313  385781 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:37:10.280740  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:37:10.297553  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:37:10.319544  385781 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:37:10.319607  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.331398  385781 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:37:10.331476  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.343549  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.355505  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.367389  385781 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:37:10.379207  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.390490  385781 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.411310  385781 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:37:10.423526  385781 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:37:10.434358  385781 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 11:37:10.434465  385781 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 11:37:10.448911  385781 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
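	[editor's note] The netfilter lines above show a fallback: the sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because br_netfilter is not loaded, so the module is modprobe'd and IPv4 forwarding is switched on. A compact sketch of that fallback; it runs the commands locally for simplicity, whereas minikube runs them on the guest over SSH:

	    package main

	    import (
	        "log"
	        "os/exec"
	    )

	    func run(name string, args ...string) error {
	        out, err := exec.Command(name, args...).CombinedOutput()
	        if err != nil {
	            log.Printf("%s %v: %v\n%s", name, args, err, out)
	        }
	        return err
	    }

	    func main() {
	        // Missing sysctl means the bridge netfilter module is not loaded yet;
	        // fall back to modprobe, then make sure IP forwarding is enabled.
	        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
	            _ = run("sudo", "modprobe", "br_netfilter")
	        }
	        _ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	    }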
	I0408 11:37:10.460213  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:37:10.603877  385781 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 11:37:10.771770  385781 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:37:10.771855  385781 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:37:10.777131  385781 start.go:562] Will wait 60s for crictl version
	I0408 11:37:10.777207  385781 ssh_runner.go:195] Run: which crictl
	I0408 11:37:10.781382  385781 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:37:10.820531  385781 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:37:10.820611  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:37:10.851504  385781 ssh_runner.go:195] Run: crio --version
	I0408 11:37:10.885901  385781 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:37:10.887559  385781 out.go:177]   - env NO_PROXY=192.168.39.99
	I0408 11:37:10.888895  385781 out.go:177]   - env NO_PROXY=192.168.39.99,192.168.39.219
	I0408 11:37:10.890184  385781 main.go:141] libmachine: (ha-438604-m03) Calling .GetIP
	I0408 11:37:10.893382  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:10.893804  385781 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:37:10.893837  385781 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:37:10.894084  385781 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:37:10.898729  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:37:10.913466  385781 mustload.go:65] Loading cluster: ha-438604
	I0408 11:37:10.913734  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:37:10.913983  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:37:10.914026  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:37:10.930307  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0408 11:37:10.930770  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:37:10.931305  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:37:10.931321  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:37:10.931677  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:37:10.931927  385781 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:37:10.933537  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:37:10.933822  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:37:10.933866  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:37:10.949890  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42507
	I0408 11:37:10.950379  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:37:10.950915  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:37:10.950941  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:37:10.951324  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:37:10.951606  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:37:10.951834  385781 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.94
	I0408 11:37:10.951850  385781 certs.go:194] generating shared ca certs ...
	I0408 11:37:10.951871  385781 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:37:10.952015  385781 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:37:10.952055  385781 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:37:10.952066  385781 certs.go:256] generating profile certs ...
	I0408 11:37:10.952133  385781 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:37:10.952159  385781 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499
	I0408 11:37:10.952175  385781 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.94 192.168.39.254]
	I0408 11:37:11.146003  385781 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499 ...
	I0408 11:37:11.146038  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499: {Name:mk0ea8c01c5a8fbfaf8fbdffa60e8eddbdccc24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:37:11.146217  385781 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499 ...
	I0408 11:37:11.146230  385781 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499: {Name:mk7ae3a704ce00bc3504ab883d6549f49766f91e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:37:11.146295  385781 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.d8c70499 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:37:11.146423  385781 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.d8c70499 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
	I0408 11:37:11.146584  385781 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
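	[editor's note] The apiserver certificate generated above carries the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.94 192.168.39.254] plus the hostnames, so the API server answers on the service VIP, every control-plane node IP and the HA virtual IP. A sketch of attaching such SANs with crypto/x509; for brevity it self-signs, whereas the real certificate is signed by the cluster CA (ca.key):

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // SANs taken from the log line above; validity and key usage are illustrative.
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-438604-m03"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"ha-438604-m03", "localhost", "minikube"},
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
	                net.ParseIP("192.168.39.99"), net.ParseIP("192.168.39.219"),
	                net.ParseIP("192.168.39.94"), net.ParseIP("192.168.39.254"),
	            },
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            log.Fatal(err)
	        }
	        _ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	        _ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
	        log.Println("wrote apiserver.crt / apiserver.key")
	    }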
	I0408 11:37:11.146603  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:37:11.146616  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:37:11.146628  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:37:11.146642  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:37:11.146654  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:37:11.146664  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:37:11.146675  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:37:11.146684  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:37:11.146729  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:37:11.146760  385781 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:37:11.146769  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:37:11.146790  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:37:11.146814  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:37:11.146835  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:37:11.146873  385781 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:37:11.146898  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.146911  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.146925  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.146960  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:37:11.150357  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:11.150720  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:37:11.150757  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:11.151022  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:37:11.151258  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:37:11.151452  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:37:11.151631  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:37:11.236259  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0408 11:37:11.241957  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 11:37:11.254270  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0408 11:37:11.259970  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 11:37:11.274475  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 11:37:11.279780  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 11:37:11.292418  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0408 11:37:11.297150  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0408 11:37:11.308254  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0408 11:37:11.313162  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 11:37:11.324294  385781 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0408 11:37:11.329082  385781 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 11:37:11.341906  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:37:11.372757  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:37:11.401159  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:37:11.431526  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:37:11.460059  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0408 11:37:11.492587  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 11:37:11.521088  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:37:11.548892  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:37:11.580086  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:37:11.612454  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:37:11.641263  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:37:11.670808  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 11:37:11.692240  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 11:37:11.712198  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 11:37:11.732399  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0408 11:37:11.751540  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 11:37:11.771024  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 11:37:11.792000  385781 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0408 11:37:11.811530  385781 ssh_runner.go:195] Run: openssl version
	I0408 11:37:11.818828  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:37:11.831849  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.836953  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.837044  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:37:11.843886  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 11:37:11.855839  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:37:11.867984  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.873242  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.873327  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:37:11.879807  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:37:11.892409  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:37:11.905694  385781 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.911142  385781 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.911222  385781 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:37:11.917642  385781 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:37:11.929996  385781 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:37:11.934738  385781 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 11:37:11.934811  385781 kubeadm.go:928] updating node {m03 192.168.39.94 8443 v1.29.3 crio true true} ...
	I0408 11:37:11.934917  385781 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:37:11.934950  385781 kube-vip.go:111] generating kube-vip config ...
	I0408 11:37:11.935004  385781 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:37:11.953628  385781 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:37:11.953708  385781 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
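
The manifest above is the static pod spec minikube writes for kube-vip on this control-plane node: kube-vip takes the `plndr-cp-lock` lease in kube-system and announces the VIP 192.168.39.254 on eth0, so later steps can reach the API server at control-plane.minikube.internal:8443 regardless of which node currently holds the address. As a rough, self-contained sketch (not minikube's own code; the manifest path and the check are assumptions for illustration), the generated file can be parsed back with the Kubernetes API types to pull out the configured VIP:

    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    // Illustrative sketch only: load a kube-vip static pod manifest and print
    // the configured VIP. The path is an assumption, not minikube's code.
    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            panic(err)
        }
        for _, c := range pod.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Printf("kube-vip VIP: %s\n", e.Value)
                }
            }
        }
    }
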
	I0408 11:37:11.953764  385781 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:37:11.964496  385781 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0408 11:37:11.964566  385781 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0408 11:37:11.975539  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0408 11:37:11.975580  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:37:11.975585  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0408 11:37:11.975603  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:37:11.975607  385781 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0408 11:37:11.975661  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:37:11.975666  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0408 11:37:11.975666  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0408 11:37:11.980908  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0408 11:37:11.980957  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0408 11:37:12.023167  385781 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:37:12.023175  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0408 11:37:12.023262  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0408 11:37:12.023295  385781 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0408 11:37:12.065496  385781 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0408 11:37:12.065542  385781 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
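
The three transfers above copy kubectl, kubeadm and kubelet from the local cache into /var/lib/minikube/binaries/v1.29.3 on the new node; the "Not caching binary" lines show that a missing binary would instead be fetched from dl.k8s.io and checked against the published .sha256 file. A minimal sketch of that download-and-verify step (illustrative only; the URL comes from the log, the code and destination path do not):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // Rough sketch (not minikube's downloader): fetch a Kubernetes release
    // binary and verify it against the published .sha256 file, which for
    // dl.k8s.io contains just the hex digest.
    func fetchAndVerify(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }

        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", dest, got)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet"
        if err := fetchAndVerify(url, "/tmp/kubelet"); err != nil {
            panic(err)
        }
        fmt.Println("kubelet downloaded and verified")
    }
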
	I0408 11:37:13.022696  385781 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 11:37:13.034529  385781 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0408 11:37:13.053420  385781 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:37:13.073781  385781 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0408 11:37:13.093979  385781 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:37:13.098407  385781 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 11:37:13.112969  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:37:13.256681  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:37:13.278747  385781 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:37:13.279407  385781 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:37:13.279489  385781 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:37:13.296735  385781 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0408 11:37:13.297203  385781 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:37:13.297808  385781 main.go:141] libmachine: Using API Version  1
	I0408 11:37:13.297836  385781 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:37:13.298183  385781 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:37:13.298451  385781 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:37:13.298600  385781 start.go:316] joinCluster: &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:37:13.298731  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 11:37:13.298746  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:37:13.301929  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:13.302485  385781 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:37:13.302514  385781 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:37:13.302735  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:37:13.302928  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:37:13.303103  385781 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:37:13.303265  385781 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:37:13.485408  385781 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:37:13.485506  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lvgud.zpbg0h9e2vljhkuc --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m03 --control-plane --apiserver-advertise-address=192.168.39.94 --apiserver-bind-port=8443"
	I0408 11:37:38.723813  385781 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lvgud.zpbg0h9e2vljhkuc --discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-438604-m03 --control-plane --apiserver-advertise-address=192.168.39.94 --apiserver-bind-port=8443": (25.238270369s)
	I0408 11:37:38.723869  385781 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 11:37:39.175661  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-438604-m03 minikube.k8s.io/updated_at=2024_04_08T11_37_39_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=ha-438604 minikube.k8s.io/primary=false
	I0408 11:37:39.316924  385781 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-438604-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 11:37:39.438710  385781 start.go:318] duration metric: took 26.140100004s to joinCluster
	I0408 11:37:39.438843  385781 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 11:37:39.440788  385781 out.go:177] * Verifying Kubernetes components...
	I0408 11:37:39.439179  385781 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:37:39.442451  385781 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:37:39.664745  385781 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:37:39.693760  385781 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:37:39.694147  385781 kapi.go:59] client config for ha-438604: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 11:37:39.694269  385781 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.99:8443
	I0408 11:37:39.694566  385781 node_ready.go:35] waiting up to 6m0s for node "ha-438604-m03" to be "Ready" ...
	I0408 11:37:39.694678  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:39.694694  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:39.694709  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:39.694715  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:39.704933  385781 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0408 11:37:40.195747  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:40.195781  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:40.195793  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:40.195798  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:40.200780  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:40.695221  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:40.695249  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:40.695258  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:40.695263  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:40.698672  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:41.195419  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:41.195447  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:41.195455  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:41.195459  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:41.203523  385781 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0408 11:37:41.694789  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:41.694822  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:41.694834  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:41.694840  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:41.700214  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:41.701021  385781 node_ready.go:53] node "ha-438604-m03" has status "Ready":"False"
	I0408 11:37:42.195768  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:42.195798  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:42.195810  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:42.195818  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:42.200064  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:42.695528  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:42.695558  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:42.695568  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:42.695574  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:42.700785  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:43.195491  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:43.195519  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:43.195531  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:43.195536  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:43.199386  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:43.695025  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:43.695122  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:43.695147  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:43.695153  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:43.699901  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:44.194848  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:44.194875  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:44.194882  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:44.194886  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:44.199180  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:44.199912  385781 node_ready.go:53] node "ha-438604-m03" has status "Ready":"False"
	I0408 11:37:44.695620  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:44.695653  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:44.695669  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:44.695675  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:44.699624  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:45.195651  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:45.195676  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:45.195698  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:45.195702  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:45.199680  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:45.694889  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:45.694917  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:45.694926  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:45.694930  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:45.698254  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.195612  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:46.195643  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.195651  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.195654  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.199847  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:46.200579  385781 node_ready.go:53] node "ha-438604-m03" has status "Ready":"False"
	I0408 11:37:46.694960  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:46.694986  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.694994  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.694998  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.698503  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.699056  385781 node_ready.go:49] node "ha-438604-m03" has status "Ready":"True"
	I0408 11:37:46.699081  385781 node_ready.go:38] duration metric: took 7.004495577s for node "ha-438604-m03" to be "Ready" ...
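
The burst of GET requests above is minikube polling /api/v1/nodes/ha-438604-m03 roughly every 500ms until the node's Ready condition turns True, which took about 7 seconds here. A comparable wait loop written directly against client-go might look like this (illustrative sketch only, not minikube's implementation; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Illustrative sketch: wait up to 6 minutes for a node's Ready condition
    // to become True, polling every 500ms.
    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-438604-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-438604-m03" is Ready`)
    }
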
	I0408 11:37:46.699090  385781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 11:37:46.699153  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:46.699164  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.699171  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.699175  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.706322  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:46.713379  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.713467  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-7gpzq
	I0408 11:37:46.713476  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.713484  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.713489  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.717087  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.717817  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:46.717836  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.717845  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.717852  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.720991  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.721480  385781 pod_ready.go:92] pod "coredns-76f75df574-7gpzq" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.721501  385781 pod_ready.go:81] duration metric: took 8.094867ms for pod "coredns-76f75df574-7gpzq" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.721511  385781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.721584  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-wqrvc
	I0408 11:37:46.721592  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.721600  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.721608  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.725210  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.726413  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:46.726429  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.726437  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.726444  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.730156  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.730656  385781 pod_ready.go:92] pod "coredns-76f75df574-wqrvc" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.730675  385781 pod_ready.go:81] duration metric: took 9.158724ms for pod "coredns-76f75df574-wqrvc" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.730685  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.730742  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604
	I0408 11:37:46.730750  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.730757  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.730763  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.734889  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:46.735488  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:46.735504  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.735517  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.735521  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.739755  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:46.740815  385781 pod_ready.go:92] pod "etcd-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.740837  385781 pod_ready.go:81] duration metric: took 10.142816ms for pod "etcd-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.740852  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.740928  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m02
	I0408 11:37:46.740942  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.740951  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.740958  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.744401  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.744944  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:46.744959  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.744967  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.744970  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.748116  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:46.748776  385781 pod_ready.go:92] pod "etcd-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:46.748796  385781 pod_ready.go:81] duration metric: took 7.935841ms for pod "etcd-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.748810  385781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:46.895893  385781 request.go:629] Waited for 146.996455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:46.895997  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:46.896005  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:46.896025  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:46.896035  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:46.899984  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.095987  385781 request.go:629] Waited for 195.192122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.096087  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.096112  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.096129  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.096138  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.099895  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.295041  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:47.295075  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.295087  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.295093  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.299113  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.495346  385781 request.go:629] Waited for 195.401092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.495426  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.495432  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.495444  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.495449  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.499354  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.749136  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:47.749162  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.749171  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.749175  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.753091  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:47.895116  385781 request.go:629] Waited for 141.241107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.895208  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:47.895216  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:47.895228  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:47.895235  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:47.899545  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:48.249964  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:48.249995  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.250004  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.250011  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.253959  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:48.296051  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:48.296079  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.296088  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.296099  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.300169  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:48.749577  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:48.749601  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.749612  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.749616  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.753657  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:48.754501  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:48.754525  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:48.754533  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:48.754537  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:48.757990  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:48.758586  385781 pod_ready.go:102] pod "etcd-ha-438604-m03" in "kube-system" namespace has status "Ready":"False"
	I0408 11:37:49.250110  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-438604-m03
	I0408 11:37:49.250140  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.250153  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.250158  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.253773  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.254408  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:49.254427  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.254435  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.254439  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.257796  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.258415  385781 pod_ready.go:92] pod "etcd-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:49.258438  385781 pod_ready.go:81] duration metric: took 2.509619072s for pod "etcd-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.258460  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.295858  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604
	I0408 11:37:49.295892  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.295904  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.295912  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.301861  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:49.495214  385781 request.go:629] Waited for 192.397135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:49.495285  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:49.495292  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.495304  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.495308  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.499567  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:49.500461  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:49.500486  385781 pod_ready.go:81] duration metric: took 242.01305ms for pod "kube-apiserver-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.500497  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.695924  385781 request.go:629] Waited for 195.350089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m02
	I0408 11:37:49.696036  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m02
	I0408 11:37:49.696049  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.696060  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.696071  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.699467  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.895975  385781 request.go:629] Waited for 195.365088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:49.896059  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:49.896065  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:49.896076  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:49.896086  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:49.899934  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:49.900897  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:49.900923  385781 pod_ready.go:81] duration metric: took 400.417819ms for pod "kube-apiserver-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:49.900997  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.095906  385781 request.go:629] Waited for 194.787366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m03
	I0408 11:37:50.095970  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-438604-m03
	I0408 11:37:50.095976  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.095984  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.095988  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.100156  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:50.295923  385781 request.go:629] Waited for 195.000072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:50.296002  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:50.296008  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.296016  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.296021  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.299630  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:50.300544  385781 pod_ready.go:92] pod "kube-apiserver-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:50.300568  385781 pod_ready.go:81] duration metric: took 399.550906ms for pod "kube-apiserver-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.300580  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.495762  385781 request.go:629] Waited for 195.094865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604
	I0408 11:37:50.495848  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604
	I0408 11:37:50.495854  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.495861  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.495866  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.499793  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:50.695931  385781 request.go:629] Waited for 195.307388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:50.696014  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:50.696022  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.696033  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.696049  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.699441  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:50.700455  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:50.700489  385781 pod_ready.go:81] duration metric: took 399.900475ms for pod "kube-controller-manager-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.700516  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:50.895380  385781 request.go:629] Waited for 194.755754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m02
	I0408 11:37:50.895463  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m02
	I0408 11:37:50.895468  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:50.895476  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:50.895484  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:50.899901  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:51.095980  385781 request.go:629] Waited for 195.181145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:51.096058  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:51.096065  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.096080  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.096091  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.099664  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:51.100294  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:51.100317  385781 pod_ready.go:81] duration metric: took 399.791343ms for pod "kube-controller-manager-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:51.100331  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:51.295933  385781 request.go:629] Waited for 195.501353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.296019  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.296029  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.296042  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.296055  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.299759  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:51.495030  385781 request.go:629] Waited for 194.308964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.495102  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.495109  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.495118  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.495125  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.500912  385781 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0408 11:37:51.695468  385781 request.go:629] Waited for 94.331993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.695561  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:51.695572  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.695583  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.695591  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.699409  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:51.895362  385781 request.go:629] Waited for 195.114116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.895440  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:51.895445  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:51.895452  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:51.895457  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:51.899519  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:52.100798  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:52.100824  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.100832  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.100836  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.105071  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:52.296055  385781 request.go:629] Waited for 190.055532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.296126  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.296131  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.296139  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.296146  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.299846  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:52.600940  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:52.600973  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.600983  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.600989  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.605306  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:52.695330  385781 request.go:629] Waited for 89.268812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.695446  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:52.695461  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:52.695472  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:52.695479  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:52.699541  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:53.101280  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:53.101306  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.101314  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.101318  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.105893  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:53.106693  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:53.106719  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.106727  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.106732  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.109962  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:53.110538  385781 pod_ready.go:102] pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace has status "Ready":"False"
	I0408 11:37:53.601512  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:53.601538  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.601546  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.601550  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.608614  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:53.609258  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:53.609276  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:53.609284  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:53.609288  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:53.612747  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.100936  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-438604-m03
	I0408 11:37:54.100962  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.100971  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.100975  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.104735  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.105410  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:54.105429  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.105436  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.105442  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.108934  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.109589  385781 pod_ready.go:92] pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:54.109611  385781 pod_ready.go:81] duration metric: took 3.009273352s for pod "kube-controller-manager-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.109629  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.109692  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5vc66
	I0408 11:37:54.109700  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.109707  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.109712  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.113136  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.295127  385781 request.go:629] Waited for 181.216713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:54.295205  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:54.295215  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.295229  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.295236  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.299059  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.299985  385781 pod_ready.go:92] pod "kube-proxy-5vc66" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:54.300019  385781 pod_ready.go:81] duration metric: took 190.37877ms for pod "kube-proxy-5vc66" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.300031  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pcbq6" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.495509  385781 request.go:629] Waited for 195.397939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcbq6
	I0408 11:37:54.495608  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pcbq6
	I0408 11:37:54.495622  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.495630  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.495634  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.499780  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:54.695453  385781 request.go:629] Waited for 194.921573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:54.695553  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:54.695565  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.695579  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.695586  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.698943  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:54.699650  385781 pod_ready.go:92] pod "kube-proxy-pcbq6" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:54.699674  385781 pod_ready.go:81] duration metric: took 399.635169ms for pod "kube-proxy-pcbq6" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.699707  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:54.895813  385781 request.go:629] Waited for 196.022595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:37:54.895923  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v98zm
	I0408 11:37:54.895933  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:54.895940  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:54.895944  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:54.899759  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.095933  385781 request.go:629] Waited for 195.398867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.096018  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.096025  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.096035  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.096044  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.099980  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.100822  385781 pod_ready.go:92] pod "kube-proxy-v98zm" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:55.100845  385781 pod_ready.go:81] duration metric: took 401.128262ms for pod "kube-proxy-v98zm" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.100862  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.295961  385781 request.go:629] Waited for 195.008095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:37:55.296058  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604
	I0408 11:37:55.296064  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.296071  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.296075  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.300155  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:55.495389  385781 request.go:629] Waited for 194.373056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.495460  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604
	I0408 11:37:55.495465  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.495472  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.495477  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.499329  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.500126  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:55.500150  385781 pod_ready.go:81] duration metric: took 399.277428ms for pod "kube-scheduler-ha-438604" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.500161  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.695158  385781 request.go:629] Waited for 194.909862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:37:55.695232  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m02
	I0408 11:37:55.695238  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.695243  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.695247  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.699042  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:55.895380  385781 request.go:629] Waited for 195.416353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:55.895475  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m02
	I0408 11:37:55.895484  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:55.895493  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:55.895500  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:55.899678  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:55.900234  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:55.900255  385781 pod_ready.go:81] duration metric: took 400.086899ms for pod "kube-scheduler-ha-438604-m02" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:55.900265  385781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:56.095438  385781 request.go:629] Waited for 195.060495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m03
	I0408 11:37:56.095512  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-438604-m03
	I0408 11:37:56.095517  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.095524  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.095529  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.099919  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:56.295002  385781 request.go:629] Waited for 193.443696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:56.295125  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes/ha-438604-m03
	I0408 11:37:56.295138  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.295148  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.295158  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.299096  385781 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0408 11:37:56.299906  385781 pod_ready.go:92] pod "kube-scheduler-ha-438604-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 11:37:56.299931  385781 pod_ready.go:81] duration metric: took 399.658719ms for pod "kube-scheduler-ha-438604-m03" in "kube-system" namespace to be "Ready" ...
	I0408 11:37:56.299947  385781 pod_ready.go:38] duration metric: took 9.600847352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
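The pod_ready entries above show minikube polling the API server until every control-plane pod reports the PodReady condition; the ~195ms "Waited for ... due to client-side throttling" lines are consistent with client-go's default client-side rate limiter. A minimal client-go sketch of the same readiness check (illustrative only, not minikube's pod_ready.go; the pod name is simply taken from the log above):

// pod_ready_sketch.go - illustrative only.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// podIsReady mirrors the check reported in the log: a pod counts as "Ready"
// when its PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above; any kube-system pod would do.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"kube-controller-manager-ha-438604-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", pod.Name, podIsReady(pod))
}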
	I0408 11:37:56.299975  385781 api_server.go:52] waiting for apiserver process to appear ...
	I0408 11:37:56.300050  385781 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:37:56.316890  385781 api_server.go:72] duration metric: took 16.877983147s to wait for apiserver process to appear ...
	I0408 11:37:56.316924  385781 api_server.go:88] waiting for apiserver healthz status ...
	I0408 11:37:56.316952  385781 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I0408 11:37:56.323765  385781 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I0408 11:37:56.323859  385781 round_trippers.go:463] GET https://192.168.39.99:8443/version
	I0408 11:37:56.323870  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.323882  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.323898  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.325018  385781 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0408 11:37:56.325105  385781 api_server.go:141] control plane version: v1.29.3
	I0408 11:37:56.325123  385781 api_server.go:131] duration metric: took 8.190044ms to wait for apiserver health ...
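The api_server.go lines above are a plain GET against the apiserver's /healthz endpoint, which answers with the literal body "ok" when healthy. A small sketch of the same probe, reusing a *kubernetes.Clientset built as in the previous snippet (illustrative only):

// healthz_sketch.go fragment - illustrative only.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServerHealthz issues GET /healthz and returns nil when the
// apiserver answers "ok", mirroring the api_server.go lines in the log.
func checkAPIServerHealthz(ctx context.Context, client *kubernetes.Clientset) error {
	body, err := client.Discovery().RESTClient().
		Get().
		AbsPath("/healthz").
		DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", body)
	}
	return nil
}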
	I0408 11:37:56.325142  385781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 11:37:56.495649  385781 request.go:629] Waited for 170.409619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.495731  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.495738  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.495745  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.495750  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.503560  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:56.510633  385781 system_pods.go:59] 24 kube-system pods found
	I0408 11:37:56.510670  385781 system_pods.go:61] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:37:56.510675  385781 system_pods.go:61] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:37:56.510679  385781 system_pods.go:61] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:37:56.510682  385781 system_pods.go:61] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:37:56.510686  385781 system_pods.go:61] "etcd-ha-438604-m03" [297e3d28-7d53-418e-9467-a3e167d27686] Running
	I0408 11:37:56.510689  385781 system_pods.go:61] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:37:56.510691  385781 system_pods.go:61] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:37:56.510694  385781 system_pods.go:61] "kindnet-dg6vt" [08b93d6c-a55d-481d-9a53-39aaab016531] Running
	I0408 11:37:56.510701  385781 system_pods.go:61] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:37:56.510704  385781 system_pods.go:61] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:37:56.510707  385781 system_pods.go:61] "kube-apiserver-ha-438604-m03" [26bcb7f0-b36e-486f-92c5-704d8aacc4a9] Running
	I0408 11:37:56.510713  385781 system_pods.go:61] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:37:56.510717  385781 system_pods.go:61] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:37:56.510720  385781 system_pods.go:61] "kube-controller-manager-ha-438604-m03" [ac6d4002-24bc-42d7-b683-20c3e6ec248b] Running
	I0408 11:37:56.510725  385781 system_pods.go:61] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:37:56.510728  385781 system_pods.go:61] "kube-proxy-pcbq6" [0af7d53e-ffe2-4c81-8d19-ff9e103795d2] Running
	I0408 11:37:56.510734  385781 system_pods.go:61] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:37:56.510737  385781 system_pods.go:61] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:37:56.510740  385781 system_pods.go:61] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:37:56.510743  385781 system_pods.go:61] "kube-scheduler-ha-438604-m03" [de828024-561c-4f5c-b161-9071f65c9090] Running
	I0408 11:37:56.510746  385781 system_pods.go:61] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:37:56.510752  385781 system_pods.go:61] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:37:56.510754  385781 system_pods.go:61] "kube-vip-ha-438604-m03" [4c4def5d-6239-411f-9126-32118b23d25d] Running
	I0408 11:37:56.510757  385781 system_pods.go:61] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:37:56.510764  385781 system_pods.go:74] duration metric: took 185.612947ms to wait for pod list to return data ...
	I0408 11:37:56.510774  385781 default_sa.go:34] waiting for default service account to be created ...
	I0408 11:37:56.695150  385781 request.go:629] Waited for 184.289452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:37:56.695236  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/default/serviceaccounts
	I0408 11:37:56.695245  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.695257  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.695270  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.698252  385781 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0408 11:37:56.698421  385781 default_sa.go:45] found service account: "default"
	I0408 11:37:56.698446  385781 default_sa.go:55] duration metric: took 187.661878ms for default service account to be created ...
	I0408 11:37:56.698459  385781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 11:37:56.895773  385781 request.go:629] Waited for 197.220291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.895855  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/namespaces/kube-system/pods
	I0408 11:37:56.895863  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:56.895872  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:56.895877  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:56.903591  385781 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0408 11:37:56.910251  385781 system_pods.go:86] 24 kube-system pods found
	I0408 11:37:56.910283  385781 system_pods.go:89] "coredns-76f75df574-7gpzq" [761f75a2-9b79-4c09-b91d-5b031e0688d4] Running
	I0408 11:37:56.910288  385781 system_pods.go:89] "coredns-76f75df574-wqrvc" [39dd6d41-947e-4b4f-85a8-99fd88d1f4d0] Running
	I0408 11:37:56.910293  385781 system_pods.go:89] "etcd-ha-438604" [6bc23e20-ec88-43c2-9f22-27b0f35e324d] Running
	I0408 11:37:56.910298  385781 system_pods.go:89] "etcd-ha-438604-m02" [f1ef4ec0-f88d-4761-818a-0bc965fba5b5] Running
	I0408 11:37:56.910302  385781 system_pods.go:89] "etcd-ha-438604-m03" [297e3d28-7d53-418e-9467-a3e167d27686] Running
	I0408 11:37:56.910306  385781 system_pods.go:89] "kindnet-82krw" [8d313f06-523c-446e-a047-640980b34c0e] Running
	I0408 11:37:56.910310  385781 system_pods.go:89] "kindnet-b5ztk" [dbc5294d-91d1-4ed1-880a-963b794e15b8] Running
	I0408 11:37:56.910314  385781 system_pods.go:89] "kindnet-dg6vt" [08b93d6c-a55d-481d-9a53-39aaab016531] Running
	I0408 11:37:56.910317  385781 system_pods.go:89] "kube-apiserver-ha-438604" [afa3acb0-ad88-4559-a03d-389e2e954808] Running
	I0408 11:37:56.910321  385781 system_pods.go:89] "kube-apiserver-ha-438604-m02" [85c68c5f-7b60-4f96-aba0-26ea6fc4541f] Running
	I0408 11:37:56.910326  385781 system_pods.go:89] "kube-apiserver-ha-438604-m03" [26bcb7f0-b36e-486f-92c5-704d8aacc4a9] Running
	I0408 11:37:56.910331  385781 system_pods.go:89] "kube-controller-manager-ha-438604" [f0e0f607-8d5e-4e7e-9be5-953f1fe9851a] Running
	I0408 11:37:56.910337  385781 system_pods.go:89] "kube-controller-manager-ha-438604-m02" [d61b1c7c-a0bd-409a-9a22-bc3a134d326f] Running
	I0408 11:37:56.910344  385781 system_pods.go:89] "kube-controller-manager-ha-438604-m03" [ac6d4002-24bc-42d7-b683-20c3e6ec248b] Running
	I0408 11:37:56.910349  385781 system_pods.go:89] "kube-proxy-5vc66" [a3ad0806-73e4-4275-8564-9972a8176fe5] Running
	I0408 11:37:56.910355  385781 system_pods.go:89] "kube-proxy-pcbq6" [0af7d53e-ffe2-4c81-8d19-ff9e103795d2] Running
	I0408 11:37:56.910362  385781 system_pods.go:89] "kube-proxy-v98zm" [b430193d-b6ab-442c-8ff3-12a8f2c144b9] Running
	I0408 11:37:56.910367  385781 system_pods.go:89] "kube-scheduler-ha-438604" [5a772319-a32f-45eb-bd2e-aa7b4a3f31a5] Running
	I0408 11:37:56.910373  385781 system_pods.go:89] "kube-scheduler-ha-438604-m02" [79dc8511-c13f-4bd2-bdb1-d2db88601ef7] Running
	I0408 11:37:56.910387  385781 system_pods.go:89] "kube-scheduler-ha-438604-m03" [de828024-561c-4f5c-b161-9071f65c9090] Running
	I0408 11:37:56.910393  385781 system_pods.go:89] "kube-vip-ha-438604" [e1ddf46e-d497-49ba-97bc-bc23a32be91a] Running
	I0408 11:37:56.910396  385781 system_pods.go:89] "kube-vip-ha-438604-m02" [6998ad26-26d5-497f-97bd-e816a17444f6] Running
	I0408 11:37:56.910400  385781 system_pods.go:89] "kube-vip-ha-438604-m03" [4c4def5d-6239-411f-9126-32118b23d25d] Running
	I0408 11:37:56.910406  385781 system_pods.go:89] "storage-provisioner" [46a902f5-0192-4a86-bfe4-4b4d663402c1] Running
	I0408 11:37:56.910414  385781 system_pods.go:126] duration metric: took 211.945737ms to wait for k8s-apps to be running ...
	I0408 11:37:56.910422  385781 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 11:37:56.910482  385781 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:37:56.926913  385781 system_svc.go:56] duration metric: took 16.478436ms WaitForService to wait for kubelet
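The system_svc check above only asks systemd whether the kubelet unit is active; minikube runs that command over SSH inside the VM. A local, illustrative equivalent with os/exec on a systemd host:

// kubelet_active_sketch.go - illustrative only; minikube runs this over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active
	// and non-zero otherwise, so the error value is the whole answer.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}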
	I0408 11:37:56.926957  385781 kubeadm.go:576] duration metric: took 17.488052693s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:37:56.926988  385781 node_conditions.go:102] verifying NodePressure condition ...
	I0408 11:37:57.095546  385781 request.go:629] Waited for 168.454408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.99:8443/api/v1/nodes
	I0408 11:37:57.095646  385781 round_trippers.go:463] GET https://192.168.39.99:8443/api/v1/nodes
	I0408 11:37:57.095653  385781 round_trippers.go:469] Request Headers:
	I0408 11:37:57.095664  385781 round_trippers.go:473]     Accept: application/json, */*
	I0408 11:37:57.095676  385781 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0408 11:37:57.100191  385781 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0408 11:37:57.101270  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:37:57.101292  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:37:57.101306  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:37:57.101311  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:37:57.101315  385781 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 11:37:57.101320  385781 node_conditions.go:123] node cpu capacity is 2
	I0408 11:37:57.101326  385781 node_conditions.go:105] duration metric: took 174.330595ms to run NodePressure ...
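The node_conditions lines above come from reading each node's reported capacity and pressure conditions. A sketch of the same read, again assuming a clientset built as in the first snippet (illustrative only):

// node_conditions_sketch.go fragment - illustrative only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeConditions lists each node's ephemeral-storage and CPU capacity
// and flags memory/disk pressure, roughly what node_conditions.go reports.
func printNodeConditions(ctx context.Context, client *kubernetes.Clientset) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure: %s\n", c.Type)
			}
		}
	}
	return nil
}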
	I0408 11:37:57.101341  385781 start.go:240] waiting for startup goroutines ...
	I0408 11:37:57.101381  385781 start.go:254] writing updated cluster config ...
	I0408 11:37:57.101720  385781 ssh_runner.go:195] Run: rm -f paused
	I0408 11:37:57.156943  385781 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 11:37:57.159179  385781 out.go:177] * Done! kubectl is now configured to use "ha-438604" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.875423491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576553875398009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47f46bf0-c82f-4eb7-8ce5-69466fd27c5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.877026918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4e04a2a-f858-46c5-bda8-47316d8e1a12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.877173401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4e04a2a-f858-46c5-bda8-47316d8e1a12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.877683309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4e04a2a-f858-46c5-bda8-47316d8e1a12 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.919410523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46ed0b3e-da3e-4910-95c2-e58859a1c800 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.919494761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46ed0b3e-da3e-4910-95c2-e58859a1c800 name=/runtime.v1.RuntimeService/Version
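The CRI-O entries in this section are debug traces of routine CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving on the runtime socket from the kubelet and from tooling such as crictl. A minimal sketch of issuing the same Version call, assuming the k8s.io/cri-api client bindings and CRI-O's default socket path (illustrative only):

// cri_version_sketch.go - illustrative only; crictl and the kubelet make the
// same RPC that produces the Version lines in this log.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default endpoint; adjust if the runtime uses another socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(context.TODO(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime %s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}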
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.921043110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2934d96b-81e0-400f-be05-760589485b31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.921587268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576553921510391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2934d96b-81e0-400f-be05-760589485b31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.922266768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=568d1e9a-46e7-4963-abc3-5fb87e33388d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.922325882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=568d1e9a-46e7-4963-abc3-5fb87e33388d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.922638759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=568d1e9a-46e7-4963-abc3-5fb87e33388d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.968449839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58324d2e-4106-4ce5-adca-ccba831ce929 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.968584891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58324d2e-4106-4ce5-adca-ccba831ce929 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.970122995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0702fbaf-1d9a-4c7d-bdeb-6614ac5ba142 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.970738997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576553970711193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0702fbaf-1d9a-4c7d-bdeb-6614ac5ba142 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.971331995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=381c3cb5-9ee6-4ff4-835a-7fafb521aad5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.971387584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=381c3cb5-9ee6-4ff4-835a-7fafb521aad5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:33 ha-438604 crio[685]: time="2024-04-08 11:42:33.971686881Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=381c3cb5-9ee6-4ff4-835a-7fafb521aad5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.014877299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abb19ae2-2357-40db-804a-cdaa3eec0643 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.014980479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abb19ae2-2357-40db-804a-cdaa3eec0643 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.016297422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60a56550-838e-446f-bbcf-50e34d4c7131 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.016796516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576554016770137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60a56550-838e-446f-bbcf-50e34d4c7131 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.017426098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05389198-f00d-4789-b7b0-3663519a1b22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.017591420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05389198-f00d-4789-b7b0-3663519a1b22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:42:34 ha-438604 crio[685]: time="2024-04-08 11:42:34.017873683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576281566752611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b72573fcec35080dfc800587a49f43ad76e5e413bec68aeb950487ccfc97c8f,PodSandboxId:9ee40d0739885de8948ab286b71dbf2d89418dc80826c8e917a3a8c3320cbe0c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712576104286401923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104333168376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576104268144313,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-9
47e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654,PodSandboxId:53cebe3e8c9222630a93080649e6ed84fe546db2d2d2f625da73c1cb773826d2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576
102512467738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576102380866535,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861,PodSandboxId:557d89a392b8e2c856f1c7d3cdf070fbed5126b5c0c405914d84d5440f923b01,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576084241856409,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8d71cc26b4f451de333cbadc95adb,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576081183108650,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576081137280449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842,PodSandboxId:29c075b954bee4fb24b831041c932c844733cdc3887d1dd27a86cffd21994ebb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576081214828311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef,PodSandboxId:e2170626fdc1cc13f50b661d90d931004f6b1ebc0fd146041f5600617517e81d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576081109163853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05389198-f00d-4789-b7b0-3663519a1b22 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11b291bd9a246       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   76f708d0734ed       busybox-7fdf7869d9-cdh5l
	f0cafcafceece       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   acf17bfe1f043       coredns-76f75df574-7gpzq
	0b72573fcec35       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   9ee40d0739885       storage-provisioner
	63c0e178c3e78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   328834ce582ca       coredns-76f75df574-wqrvc
	557462b300c32       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   53cebe3e8c922       kindnet-82krw
	a0bffd365d14f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago       Running             kube-proxy                0                   ffe693490c6c3       kube-proxy-v98zm
	b2d05e909b1dd       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   557d89a392b8e       kube-vip-ha-438604
	677d8d8c878cc       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   29c075b954bee       kube-apiserver-ha-438604
	982252ef21b29       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   95acb68b16e77       kube-scheduler-ha-438604
	532fccde459b9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   681a212174e36       etcd-ha-438604
	3f52ec6258fa2       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   e2170626fdc1c       kube-controller-manager-ha-438604
	
	
	==> coredns [63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938] <==
	[INFO] 10.244.1.2:35295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151396s
	[INFO] 10.244.1.2:60373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.00071102s
	[INFO] 10.244.1.2:45844 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001995968s
	[INFO] 10.244.0.4:35463 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000243774s
	[INFO] 10.244.0.4:39312 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164775s
	[INFO] 10.244.2.2:45779 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188488s
	[INFO] 10.244.2.2:55046 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001548285s
	[INFO] 10.244.2.2:39734 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001381546s
	[INFO] 10.244.2.2:60648 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00017788s
	[INFO] 10.244.1.2:50535 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012891s
	[INFO] 10.244.1.2:34893 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001689023s
	[INFO] 10.244.1.2:54572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121059s
	[INFO] 10.244.0.4:55733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248755s
	[INFO] 10.244.0.4:44663 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046479s
	[INFO] 10.244.2.2:43313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161932s
	[INFO] 10.244.2.2:36056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115719s
	[INFO] 10.244.2.2:58531 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248815s
	[INFO] 10.244.1.2:40849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115353s
	[INFO] 10.244.1.2:51289 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105404s
	[INFO] 10.244.1.2:56814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070319s
	[INFO] 10.244.0.4:35492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160626s
	[INFO] 10.244.0.4:34374 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082632s
	[INFO] 10.244.2.2:43756 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109569s
	[INFO] 10.244.2.2:45152 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124387s
	[INFO] 10.244.1.2:38830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135636s
	
	
	==> coredns [f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d] <==
	[INFO] 10.244.0.4:59845 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002631659s
	[INFO] 10.244.0.4:58127 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000184769s
	[INFO] 10.244.0.4:40273 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002164716s
	[INFO] 10.244.0.4:44675 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011526s
	[INFO] 10.244.0.4:52644 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122913s
	[INFO] 10.244.2.2:49571 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135207s
	[INFO] 10.244.2.2:54106 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016012s
	[INFO] 10.244.2.2:33817 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024307s
	[INFO] 10.244.2.2:53777 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096848s
	[INFO] 10.244.1.2:51257 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001970096s
	[INFO] 10.244.1.2:37927 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164729s
	[INFO] 10.244.1.2:46840 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074025s
	[INFO] 10.244.1.2:40034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116336s
	[INFO] 10.244.1.2:46524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110431s
	[INFO] 10.244.0.4:47504 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116612s
	[INFO] 10.244.0.4:52704 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105138s
	[INFO] 10.244.2.2:40699 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000199266s
	[INFO] 10.244.1.2:46666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009956s
	[INFO] 10.244.0.4:57492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119263s
	[INFO] 10.244.0.4:45362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139004s
	[INFO] 10.244.2.2:58706 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239864s
	[INFO] 10.244.2.2:32981 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128008s
	[INFO] 10.244.1.2:38182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167786s
	[INFO] 10.244.1.2:44324 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206004s
	[INFO] 10.244.1.2:37810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013702s
	
	
	==> describe nodes <==
	Name:               ha-438604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T11_34_48_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:34:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:42:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:38:22 +0000   Mon, 08 Apr 2024 11:35:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-438604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d242cef9ed484660b2c31aeed7e51ff5
	  System UUID:                d242cef9-ed48-4660-b2c3-1aeed7e51ff5
	  Boot ID:                    336ee057-2212-4601-ad25-56ebfd2bc06e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-cdh5l             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-76f75df574-7gpzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m34s
	  kube-system                 coredns-76f75df574-wqrvc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m34s
	  kube-system                 etcd-ha-438604                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m46s
	  kube-system                 kindnet-82krw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m34s
	  kube-system                 kube-apiserver-ha-438604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-controller-manager-ha-438604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-proxy-v98zm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-scheduler-ha-438604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-vip-ha-438604                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m54s (x7 over 7m54s)  kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m54s (x8 over 7m54s)  kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m54s (x8 over 7m54s)  kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m46s                  kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m46s                  kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m46s                  kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m35s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal  NodeReady                7m31s                  kubelet          Node ha-438604 status is now: NodeReady
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	
	
	Name:               ha-438604-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_36_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:36:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:39:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Apr 2024 11:38:26 +0000   Mon, 08 Apr 2024 11:39:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    ha-438604-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 957d2c54c49c48d0b297f4467d1bac27
	  System UUID:                957d2c54-c49c-48d0-b297-f4467d1bac27
	  Boot ID:                    4a3bfa74-44c6-4743-beca-7f47225d1d75
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jz4h9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-438604-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-b5ztk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-438604-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-438604-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-5vc66                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-438604-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-vip-ha-438604-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  Starting                 6m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m11s (x2 over 6m11s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x2 over 6m11s)  kubelet          Node ha-438604-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x2 over 6m11s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeReady                6m                     kubelet          Node ha-438604-m02 status is now: NodeReady
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeNotReady             2m47s                  node-controller  Node ha-438604-m02 status is now: NodeNotReady
	
	
	Name:               ha-438604-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_37_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:42:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:38:05 +0000   Mon, 08 Apr 2024 11:37:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-438604-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27ce6086b0b04606902de8def056d57d
	  System UUID:                27ce6086-b0b0-4606-902d-e8def056d57d
	  Boot ID:                    e16126c1-ef05-4bfe-9505-165bab469df6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gk5bx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 etcd-ha-438604-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-dg6vt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m
	  kube-system                 kube-apiserver-ha-438604-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-438604-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-pcbq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-ha-438604-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-438604-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node ha-438604-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node ha-438604-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x7 over 5m)  kubelet          Node ha-438604-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m58s            node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal  RegisteredNode           4m55s            node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal  RegisteredNode           4m42s            node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	
	
	Name:               ha-438604-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_38_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:38:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:42:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:39:08 +0000   Mon, 08 Apr 2024 11:38:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-438604-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0df153c018eb4bd3bce7e2132da5651e
	  System UUID:                0df153c0-18eb-4bd3-bce7-e2132da5651e
	  Boot ID:                    d3414940-ef8a-4f31-9dec-601fdd6541e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8rrcs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m56s
	  kube-system                 kube-proxy-2vmwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x3 over 3m57s)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x3 over 3m57s)  kubelet          Node ha-438604-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x3 over 3m57s)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-438604-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 8 11:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054164] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042688] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.558260] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.775339] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.659050] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.215969] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.059868] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060056] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.165820] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.136183] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.312263] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.585051] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.065353] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.627952] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.807124] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.007059] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.588023] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[Apr 8 11:35] kauditd_printk_skb: 15 callbacks suppressed
	[Apr 8 11:36] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18] <==
	{"level":"warn","ts":"2024-04-08T11:42:34.339645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.349174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.351957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.359906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.36539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.369726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.385841Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.396759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.425866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.431759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.453637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.463726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.469405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.479268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.488015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.49569Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.498504Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7ff681eaaadd5fcd","rtt":"1.045016ms","error":"dial tcp 192.168.39.219:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-04-08T11:42:34.498647Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7ff681eaaadd5fcd","rtt":"11.5172ms","error":"dial tcp 192.168.39.219:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-04-08T11:42:34.504925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.510249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.51439Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.521291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.529352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.536905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-08T11:42:34.552389Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3b7a74ffda0d9c54","from":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:42:34 up 8 min,  0 users,  load average: 0.45, 0.36, 0.19
	Linux ha-438604 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654] <==
	I0408 11:42:04.119475       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:42:14.134329       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:42:14.134441       1 main.go:227] handling current node
	I0408 11:42:14.134467       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:42:14.134484       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:42:14.134705       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:42:14.134745       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:42:14.134811       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:42:14.134830       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:42:24.147859       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:42:24.148009       1 main.go:227] handling current node
	I0408 11:42:24.148051       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:42:24.148079       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:42:24.148243       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:42:24.148285       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:42:24.148367       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:42:24.148399       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:42:34.165103       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:42:34.165131       1 main.go:227] handling current node
	I0408 11:42:34.165145       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:42:34.165150       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:42:34.165322       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:42:34.165336       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:42:34.165409       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:42:34.165418       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842] <==
	I0408 11:34:44.447203       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0408 11:34:44.447498       1 aggregator.go:165] initial CRD sync complete...
	I0408 11:34:44.447609       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 11:34:44.447672       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 11:34:44.447696       1 cache.go:39] Caches are synced for autoregister controller
	I0408 11:34:44.450089       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 11:34:44.486863       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 11:34:44.531256       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 11:34:45.337337       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 11:34:45.345590       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 11:34:45.345625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 11:34:46.059164       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 11:34:46.115853       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 11:34:46.343177       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0408 11:34:46.359981       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.99]
	I0408 11:34:46.361304       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 11:34:46.366968       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 11:34:46.383411       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 11:34:48.217897       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 11:34:48.237197       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 11:34:48.247479       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0408 11:35:00.313205       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0408 11:35:00.386408       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E0408 11:38:04.022997       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.99:39356->192.168.39.219:10250: write: broken pipe
	W0408 11:39:26.277123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94 192.168.39.99]
	
	
	==> kube-controller-manager [3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef] <==
	I0408 11:37:58.811744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="359.036525ms"
	E0408 11:37:58.811797       1 replica_set.go:557] sync "default/busybox-7fdf7869d9" failed with Operation cannot be fulfilled on replicasets.apps "busybox-7fdf7869d9": the object has been modified; please apply your changes to the latest version and try again
	I0408 11:37:58.811945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="109.014µs"
	I0408 11:37:58.817952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="430.984µs"
	I0408 11:38:01.795183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="19.766662ms"
	I0408 11:38:01.795321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.204µs"
	I0408 11:38:01.879504       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="41.687187ms"
	I0408 11:38:01.879896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="107.994µs"
	I0408 11:38:02.348230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.211236ms"
	I0408 11:38:02.348610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="255.762µs"
	E0408 11:38:37.684491       1 certificate_controller.go:146] Sync csr-mmx9w failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-mmx9w": the object has been modified; please apply your changes to the latest version and try again
	I0408 11:38:37.987772       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-438604-m04\" does not exist"
	I0408 11:38:38.012469       1 range_allocator.go:380] "Set node PodCIDR" node="ha-438604-m04" podCIDRs=["10.244.3.0/24"]
	I0408 11:38:38.036232       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7zhbb"
	I0408 11:38:38.036468       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8rrcs"
	I0408 11:38:38.217696       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-bpf4z"
	I0408 11:38:38.222508       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-l987x"
	I0408 11:38:38.270564       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-hv8lp"
	I0408 11:38:38.270615       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-7zhbb"
	I0408 11:38:39.583859       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-438604-m04"
	I0408 11:38:39.584018       1 event.go:376] "Event occurred" object="ha-438604-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller"
	I0408 11:38:48.493317       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-438604-m04"
	I0408 11:39:47.944280       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-438604-m04"
	I0408 11:39:48.036847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.577014ms"
	I0408 11:39:48.037153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="131.979µs"
	
	
	==> kube-proxy [a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119] <==
	I0408 11:35:02.658090       1 server_others.go:72] "Using iptables proxy"
	I0408 11:35:02.690993       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	I0408 11:35:02.734340       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 11:35:02.734424       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 11:35:02.734491       1 server_others.go:168] "Using iptables Proxier"
	I0408 11:35:02.738506       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 11:35:02.739305       1 server.go:865] "Version info" version="v1.29.3"
	I0408 11:35:02.739380       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:35:02.741210       1 config.go:188] "Starting service config controller"
	I0408 11:35:02.741459       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 11:35:02.741634       1 config.go:97] "Starting endpoint slice config controller"
	I0408 11:35:02.741670       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 11:35:02.742775       1 config.go:315] "Starting node config controller"
	I0408 11:35:02.742855       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 11:35:02.841944       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 11:35:02.842005       1 shared_informer.go:318] Caches are synced for service config
	I0408 11:35:02.843308       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a] <==
	W0408 11:34:45.455973       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 11:34:45.456066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 11:34:45.470402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 11:34:45.470455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 11:34:45.476434       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 11:34:45.476484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:34:45.550192       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:34:45.550244       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 11:34:45.590975       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 11:34:45.591244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 11:34:45.635661       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 11:34:45.635714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0408 11:34:45.715450       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:34:45.715630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:34:45.718741       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:34:45.718796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0408 11:34:48.193426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0408 11:37:58.195651       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jz4h9\": pod busybox-7fdf7869d9-jz4h9 is already assigned to node \"ha-438604-m02\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-jz4h9" node="ha-438604-m02"
	E0408 11:37:58.197761       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 4a6f771f-16dd-4c0c-8d7d-c435b6e95b4f(default/busybox-7fdf7869d9-jz4h9) wasn't assumed so cannot be forgotten"
	E0408 11:37:58.197988       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jz4h9\": pod busybox-7fdf7869d9-jz4h9 is already assigned to node \"ha-438604-m02\"" pod="default/busybox-7fdf7869d9-jz4h9"
	I0408 11:37:58.198278       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-jz4h9" node="ha-438604-m02"
	E0408 11:37:58.253234       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-cdh5l\": pod busybox-7fdf7869d9-cdh5l is already assigned to node \"ha-438604\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-cdh5l" node="ha-438604"
	E0408 11:37:58.254055       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a83c06a6-d809-4c17-a406-3f1d4b9cfaf7(default/busybox-7fdf7869d9-cdh5l) wasn't assumed so cannot be forgotten"
	E0408 11:37:58.254322       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-cdh5l\": pod busybox-7fdf7869d9-cdh5l is already assigned to node \"ha-438604\"" pod="default/busybox-7fdf7869d9-cdh5l"
	I0408 11:37:58.254493       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-cdh5l" node="ha-438604"
	
	
	==> kubelet <==
	Apr 08 11:37:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:37:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:37:58 ha-438604 kubelet[1376]: I0408 11:37:58.216755    1376 topology_manager.go:215] "Topology Admit Handler" podUID="a83c06a6-d809-4c17-a406-3f1d4b9cfaf7" podNamespace="default" podName="busybox-7fdf7869d9-cdh5l"
	Apr 08 11:37:58 ha-438604 kubelet[1376]: I0408 11:37:58.260908    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdv2\" (UniqueName: \"kubernetes.io/projected/a83c06a6-d809-4c17-a406-3f1d4b9cfaf7-kube-api-access-ncdv2\") pod \"busybox-7fdf7869d9-cdh5l\" (UID: \"a83c06a6-d809-4c17-a406-3f1d4b9cfaf7\") " pod="default/busybox-7fdf7869d9-cdh5l"
	Apr 08 11:38:02 ha-438604 kubelet[1376]: I0408 11:38:02.309128    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-cdh5l" podStartSLOduration=1.679330254 podStartE2EDuration="4.309015456s" podCreationTimestamp="2024-04-08 11:37:58 +0000 UTC" firstStartedPulling="2024-04-08 11:37:58.925614531 +0000 UTC m=+190.737965944" lastFinishedPulling="2024-04-08 11:38:01.555299729 +0000 UTC m=+193.367651146" observedRunningTime="2024-04-08 11:38:02.307680619 +0000 UTC m=+194.120032032" watchObservedRunningTime="2024-04-08 11:38:02.309015456 +0000 UTC m=+194.121366902"
	Apr 08 11:38:48 ha-438604 kubelet[1376]: E0408 11:38:48.498609    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:38:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:38:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:38:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:38:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:39:48 ha-438604 kubelet[1376]: E0408 11:39:48.492700    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:39:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:39:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:39:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:39:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:40:48 ha-438604 kubelet[1376]: E0408 11:40:48.492951    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:40:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:40:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:40:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:40:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:41:48 ha-438604 kubelet[1376]: E0408 11:41:48.491599    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:41:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:41:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:41:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:41:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-438604 -n ha-438604
helpers_test.go:261: (dbg) Run:  kubectl --context ha-438604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-438604 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-438604 -v=7 --alsologtostderr
E0408 11:43:06.833127  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:43:34.517256  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:43:44.542894  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-438604 -v=7 --alsologtostderr: exit status 82 (2m2.05520816s)

                                                
                                                
-- stdout --
	* Stopping node "ha-438604-m04"  ...
	* Stopping node "ha-438604-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:42:36.156870  391726 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:42:36.157014  391726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:36.157024  391726 out.go:304] Setting ErrFile to fd 2...
	I0408 11:42:36.157029  391726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:42:36.157243  391726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:42:36.157484  391726 out.go:298] Setting JSON to false
	I0408 11:42:36.157577  391726 mustload.go:65] Loading cluster: ha-438604
	I0408 11:42:36.157970  391726 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:42:36.158080  391726 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:42:36.158294  391726 mustload.go:65] Loading cluster: ha-438604
	I0408 11:42:36.158473  391726 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:42:36.158518  391726 stop.go:39] StopHost: ha-438604-m04
	I0408 11:42:36.158955  391726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:36.159013  391726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:36.176330  391726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0408 11:42:36.176911  391726 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:36.177612  391726 main.go:141] libmachine: Using API Version  1
	I0408 11:42:36.177645  391726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:36.178037  391726 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:36.181071  391726 out.go:177] * Stopping node "ha-438604-m04"  ...
	I0408 11:42:36.184981  391726 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 11:42:36.185051  391726 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:42:36.185463  391726 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 11:42:36.185500  391726 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:42:36.188811  391726 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:36.189280  391726 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:38:22 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:42:36.189317  391726 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:42:36.189517  391726 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:42:36.189718  391726 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:42:36.189885  391726 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:42:36.190023  391726 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:42:36.280157  391726 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 11:42:36.334350  391726 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 11:42:36.389426  391726 main.go:141] libmachine: Stopping "ha-438604-m04"...
	I0408 11:42:36.389470  391726 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:42:36.391198  391726 main.go:141] libmachine: (ha-438604-m04) Calling .Stop
	I0408 11:42:36.395227  391726 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 0/120
	I0408 11:42:37.694779  391726 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:42:37.696292  391726 main.go:141] libmachine: Machine "ha-438604-m04" was stopped.
	I0408 11:42:37.696316  391726 stop.go:75] duration metric: took 1.511345007s to stop
	I0408 11:42:37.696364  391726 stop.go:39] StopHost: ha-438604-m03
	I0408 11:42:37.696790  391726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:42:37.696846  391726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:42:37.712713  391726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0408 11:42:37.713145  391726 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:42:37.713713  391726 main.go:141] libmachine: Using API Version  1
	I0408 11:42:37.713738  391726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:42:37.714095  391726 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:42:37.716569  391726 out.go:177] * Stopping node "ha-438604-m03"  ...
	I0408 11:42:37.718501  391726 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 11:42:37.718529  391726 main.go:141] libmachine: (ha-438604-m03) Calling .DriverName
	I0408 11:42:37.718830  391726 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 11:42:37.718858  391726 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHHostname
	I0408 11:42:37.721657  391726 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:37.722166  391726 main.go:141] libmachine: (ha-438604-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:7c:74", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:36:59 +0000 UTC Type:0 Mac:52:54:00:fa:7c:74 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-438604-m03 Clientid:01:52:54:00:fa:7c:74}
	I0408 11:42:37.722205  391726 main.go:141] libmachine: (ha-438604-m03) DBG | domain ha-438604-m03 has defined IP address 192.168.39.94 and MAC address 52:54:00:fa:7c:74 in network mk-ha-438604
	I0408 11:42:37.722357  391726 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHPort
	I0408 11:42:37.722569  391726 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHKeyPath
	I0408 11:42:37.722733  391726 main.go:141] libmachine: (ha-438604-m03) Calling .GetSSHUsername
	I0408 11:42:37.722884  391726 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m03/id_rsa Username:docker}
	I0408 11:42:37.812928  391726 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 11:42:37.867918  391726 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 11:42:37.924890  391726 main.go:141] libmachine: Stopping "ha-438604-m03"...
	I0408 11:42:37.924931  391726 main.go:141] libmachine: (ha-438604-m03) Calling .GetState
	I0408 11:42:37.926566  391726 main.go:141] libmachine: (ha-438604-m03) Calling .Stop
	I0408 11:42:37.930440  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 0/120
	I0408 11:42:38.932104  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 1/120
	I0408 11:42:39.933512  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 2/120
	I0408 11:42:40.934943  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 3/120
	I0408 11:42:41.936544  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 4/120
	I0408 11:42:42.938331  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 5/120
	I0408 11:42:43.940055  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 6/120
	I0408 11:42:44.942431  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 7/120
	I0408 11:42:45.943944  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 8/120
	I0408 11:42:46.945732  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 9/120
	I0408 11:42:47.948162  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 10/120
	I0408 11:42:48.950537  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 11/120
	I0408 11:42:49.952374  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 12/120
	I0408 11:42:50.954077  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 13/120
	I0408 11:42:51.955766  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 14/120
	I0408 11:42:52.957874  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 15/120
	I0408 11:42:53.959667  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 16/120
	I0408 11:42:54.961382  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 17/120
	I0408 11:42:55.963193  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 18/120
	I0408 11:42:56.964876  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 19/120
	I0408 11:42:57.967194  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 20/120
	I0408 11:42:58.968844  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 21/120
	I0408 11:42:59.970575  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 22/120
	I0408 11:43:00.972088  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 23/120
	I0408 11:43:01.973929  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 24/120
	I0408 11:43:02.976165  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 25/120
	I0408 11:43:03.978118  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 26/120
	I0408 11:43:04.980048  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 27/120
	I0408 11:43:05.981750  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 28/120
	I0408 11:43:06.983786  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 29/120
	I0408 11:43:07.986057  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 30/120
	I0408 11:43:08.987800  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 31/120
	I0408 11:43:09.989484  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 32/120
	I0408 11:43:10.991253  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 33/120
	I0408 11:43:11.992854  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 34/120
	I0408 11:43:12.994773  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 35/120
	I0408 11:43:13.996604  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 36/120
	I0408 11:43:14.998330  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 37/120
	I0408 11:43:16.000461  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 38/120
	I0408 11:43:17.002155  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 39/120
	I0408 11:43:18.004418  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 40/120
	I0408 11:43:19.006833  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 41/120
	I0408 11:43:20.008578  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 42/120
	I0408 11:43:21.010010  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 43/120
	I0408 11:43:22.011638  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 44/120
	I0408 11:43:23.013622  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 45/120
	I0408 11:43:24.015100  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 46/120
	I0408 11:43:25.016560  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 47/120
	I0408 11:43:26.018246  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 48/120
	I0408 11:43:27.020584  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 49/120
	I0408 11:43:28.022319  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 50/120
	I0408 11:43:29.023829  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 51/120
	I0408 11:43:30.025331  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 52/120
	I0408 11:43:31.027529  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 53/120
	I0408 11:43:32.029093  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 54/120
	I0408 11:43:33.031056  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 55/120
	I0408 11:43:34.032733  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 56/120
	I0408 11:43:35.034935  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 57/120
	I0408 11:43:36.036769  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 58/120
	I0408 11:43:37.038431  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 59/120
	I0408 11:43:38.040618  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 60/120
	I0408 11:43:39.043076  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 61/120
	I0408 11:43:40.044865  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 62/120
	I0408 11:43:41.046639  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 63/120
	I0408 11:43:42.048400  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 64/120
	I0408 11:43:43.050358  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 65/120
	I0408 11:43:44.052109  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 66/120
	I0408 11:43:45.053583  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 67/120
	I0408 11:43:46.055300  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 68/120
	I0408 11:43:47.057160  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 69/120
	I0408 11:43:48.059107  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 70/120
	I0408 11:43:49.060495  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 71/120
	I0408 11:43:50.062208  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 72/120
	I0408 11:43:51.063635  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 73/120
	I0408 11:43:52.065270  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 74/120
	I0408 11:43:53.067282  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 75/120
	I0408 11:43:54.068734  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 76/120
	I0408 11:43:55.070376  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 77/120
	I0408 11:43:56.072036  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 78/120
	I0408 11:43:57.073656  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 79/120
	I0408 11:43:58.075762  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 80/120
	I0408 11:43:59.077237  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 81/120
	I0408 11:44:00.078845  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 82/120
	I0408 11:44:01.080267  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 83/120
	I0408 11:44:02.081715  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 84/120
	I0408 11:44:03.083662  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 85/120
	I0408 11:44:04.085045  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 86/120
	I0408 11:44:05.087026  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 87/120
	I0408 11:44:06.088779  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 88/120
	I0408 11:44:07.091016  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 89/120
	I0408 11:44:08.092918  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 90/120
	I0408 11:44:09.094233  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 91/120
	I0408 11:44:10.095680  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 92/120
	I0408 11:44:11.096925  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 93/120
	I0408 11:44:12.098281  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 94/120
	I0408 11:44:13.100636  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 95/120
	I0408 11:44:14.102678  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 96/120
	I0408 11:44:15.104066  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 97/120
	I0408 11:44:16.106338  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 98/120
	I0408 11:44:17.107626  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 99/120
	I0408 11:44:18.109637  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 100/120
	I0408 11:44:19.111259  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 101/120
	I0408 11:44:20.112573  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 102/120
	I0408 11:44:21.113904  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 103/120
	I0408 11:44:22.115406  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 104/120
	I0408 11:44:23.116666  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 105/120
	I0408 11:44:24.117885  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 106/120
	I0408 11:44:25.119334  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 107/120
	I0408 11:44:26.120883  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 108/120
	I0408 11:44:27.122461  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 109/120
	I0408 11:44:28.124362  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 110/120
	I0408 11:44:29.126471  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 111/120
	I0408 11:44:30.128000  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 112/120
	I0408 11:44:31.130241  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 113/120
	I0408 11:44:32.131810  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 114/120
	I0408 11:44:33.133753  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 115/120
	I0408 11:44:34.135790  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 116/120
	I0408 11:44:35.137105  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 117/120
	I0408 11:44:36.138879  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 118/120
	I0408 11:44:37.140480  391726 main.go:141] libmachine: (ha-438604-m03) Waiting for machine to stop 119/120
	I0408 11:44:38.141381  391726 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0408 11:44:38.141474  391726 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0408 11:44:38.143636  391726 out.go:177] 
	W0408 11:44:38.145257  391726 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0408 11:44:38.145283  391726 out.go:239] * 
	* 
	W0408 11:44:38.148523  391726 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:44:38.149794  391726 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-438604 -v=7 --alsologtostderr" : exit status 82
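For context, ha_test.go:464 reports exit status 82 only after the VM stayed "Running" through all 120 one-second polls shown in the stderr above. The following is a minimal Go sketch of that poll-until-stopped pattern; the function and parameter names are hypothetical and this is not minikube's actual implementation, only an illustration of the wait loop the log implies.

	// Hypothetical sketch (not minikube's code) of the stop-wait pattern seen above:
	// poll the VM state roughly once per second, up to a fixed number of attempts,
	// and give up with a GUEST_STOP_TIMEOUT-style error if it never leaves "Running".
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls getState (a stand-in for the driver's .GetState call)
	// until the machine is no longer "Running" or the attempts are exhausted.
	func waitForStop(getState func() (string, error), attempts int, delay time.Duration) error {
		for i := 0; i < attempts; i++ {
			if state, err := getState(); err == nil && state != "Running" {
				return nil // machine reached a stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(delay)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulated driver that never stops, reproducing the failure mode in this test.
		stuck := func() (string, error) { return "Running", nil }
		if err := waitForStop(stuck, 3, 10*time.Millisecond); err != nil {
			fmt.Println("stop err:", err)
		}
	}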
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-438604 --wait=true -v=7 --alsologtostderr
E0408 11:48:06.832402  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-438604 --wait=true -v=7 --alsologtostderr: (3m58.340434473s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-438604
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-438604 -n ha-438604
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-438604 logs -n 25: (2.155807817s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m04 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp testdata/cp-test.txt                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604:/home/docker/cp-test_ha-438604-m04_ha-438604.txt                       |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604 sudo cat                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604.txt                                 |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03:/home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m03 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-438604 node stop m02 -v=7                                                     | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-438604 node start m02 -v=7                                                    | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-438604 -v=7                                                           | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-438604 -v=7                                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-438604 --wait=true -v=7                                                    | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:44 UTC | 08 Apr 24 11:48 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-438604                                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:48 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:44:38
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:44:38.214855  392192 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:44:38.215008  392192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:44:38.215021  392192 out.go:304] Setting ErrFile to fd 2...
	I0408 11:44:38.215027  392192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:44:38.215260  392192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:44:38.215899  392192 out.go:298] Setting JSON to false
	I0408 11:44:38.216957  392192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5221,"bootTime":1712571457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:44:38.217033  392192 start.go:139] virtualization: kvm guest
	I0408 11:44:38.219847  392192 out.go:177] * [ha-438604] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:44:38.221482  392192 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:44:38.221490  392192 notify.go:220] Checking for updates...
	I0408 11:44:38.222949  392192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:44:38.224423  392192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:44:38.226080  392192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:44:38.227515  392192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:44:38.228960  392192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:44:38.230638  392192 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:44:38.230748  392192 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:44:38.231259  392192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:44:38.231307  392192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:44:38.246610  392192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0408 11:44:38.247238  392192 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:44:38.247952  392192 main.go:141] libmachine: Using API Version  1
	I0408 11:44:38.247979  392192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:44:38.248485  392192 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:44:38.248721  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:44:38.285043  392192 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 11:44:38.286359  392192 start.go:297] selected driver: kvm2
	I0408 11:44:38.286378  392192 start.go:901] validating driver "kvm2" against &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:44:38.286557  392192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:44:38.286882  392192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:44:38.286950  392192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:44:38.302294  392192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:44:38.302973  392192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:44:38.303057  392192 cni.go:84] Creating CNI manager for ""
	I0408 11:44:38.303069  392192 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0408 11:44:38.303122  392192 start.go:340] cluster config:
	{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:44:38.303321  392192 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:44:38.305972  392192 out.go:177] * Starting "ha-438604" primary control-plane node in "ha-438604" cluster
	I0408 11:44:38.307458  392192 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:44:38.307508  392192 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 11:44:38.307522  392192 cache.go:56] Caching tarball of preloaded images
	I0408 11:44:38.307610  392192 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:44:38.307621  392192 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:44:38.307782  392192 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:44:38.308008  392192 start.go:360] acquireMachinesLock for ha-438604: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:44:38.308056  392192 start.go:364] duration metric: took 27.504µs to acquireMachinesLock for "ha-438604"
	I0408 11:44:38.308073  392192 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:44:38.308086  392192 fix.go:54] fixHost starting: 
	I0408 11:44:38.308335  392192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:44:38.308369  392192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:44:38.323450  392192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0408 11:44:38.324036  392192 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:44:38.324585  392192 main.go:141] libmachine: Using API Version  1
	I0408 11:44:38.324611  392192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:44:38.324998  392192 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:44:38.325235  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:44:38.325415  392192 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:44:38.327244  392192 fix.go:112] recreateIfNeeded on ha-438604: state=Running err=<nil>
	W0408 11:44:38.327262  392192 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:44:38.329239  392192 out.go:177] * Updating the running kvm2 "ha-438604" VM ...
	I0408 11:44:38.330300  392192 machine.go:94] provisionDockerMachine start ...
	I0408 11:44:38.330324  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:44:38.330586  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.333281  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.333750  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.333779  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.333968  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.334145  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.334296  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.334434  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.334616  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.334807  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.334823  392192 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 11:44:38.453369  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604
	
	I0408 11:44:38.453398  392192 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:44:38.453650  392192 buildroot.go:166] provisioning hostname "ha-438604"
	I0408 11:44:38.453677  392192 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:44:38.453906  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.456531  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.457064  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.457092  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.457297  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.457510  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.457694  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.457883  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.458081  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.458307  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.458323  392192 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604 && echo "ha-438604" | sudo tee /etc/hostname
	I0408 11:44:38.594814  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604
	
	I0408 11:44:38.594860  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.597842  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.598302  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.598338  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.598574  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.598822  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.599019  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.599145  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.599356  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.599534  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.599550  392192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:44:38.717190  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:44:38.717228  392192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:44:38.717260  392192 buildroot.go:174] setting up certificates
	I0408 11:44:38.717273  392192 provision.go:84] configureAuth start
	I0408 11:44:38.717283  392192 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:44:38.717630  392192 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:44:38.720517  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.720984  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.721006  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.721223  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.723670  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.724073  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.724116  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.724273  392192 provision.go:143] copyHostCerts
	I0408 11:44:38.724305  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:44:38.724371  392192 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:44:38.724393  392192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:44:38.724481  392192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:44:38.724578  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:44:38.724601  392192 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:44:38.724608  392192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:44:38.724632  392192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:44:38.724736  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:44:38.724755  392192 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:44:38.724759  392192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:44:38.724784  392192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:44:38.724847  392192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604 san=[127.0.0.1 192.168.39.99 ha-438604 localhost minikube]
	I0408 11:44:38.780631  392192 provision.go:177] copyRemoteCerts
	I0408 11:44:38.780706  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:44:38.780734  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.783636  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.784072  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.784105  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.784282  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.784502  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.784680  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.784802  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:44:38.877161  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:44:38.877228  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:44:38.905740  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:44:38.905829  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0408 11:44:38.935152  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:44:38.935264  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:44:38.965534  392192 provision.go:87] duration metric: took 248.246129ms to configureAuth
	I0408 11:44:38.965568  392192 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:44:38.965835  392192 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:44:38.965947  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.968421  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.968802  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.968828  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.969016  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.969222  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.969429  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.969596  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.969779  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.969999  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.970030  392192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:46:09.944554  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:46:09.944588  392192 machine.go:97] duration metric: took 1m31.61427192s to provisionDockerMachine
	I0408 11:46:09.944607  392192 start.go:293] postStartSetup for "ha-438604" (driver="kvm2")
	I0408 11:46:09.944621  392192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:46:09.944664  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:09.945026  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:46:09.945070  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:09.948349  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:09.948771  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:09.948796  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:09.948935  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:09.949149  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:09.949307  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:09.949427  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:46:10.041213  392192 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:46:10.046167  392192 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:46:10.046199  392192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:46:10.046288  392192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:46:10.046410  392192 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:46:10.046430  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:46:10.046531  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:46:10.057545  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:46:10.085298  392192 start.go:296] duration metric: took 140.652468ms for postStartSetup
	I0408 11:46:10.085390  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.085763  392192 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0408 11:46:10.085795  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.088447  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.088973  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.089008  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.089240  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.089476  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.089652  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.089847  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	W0408 11:46:10.178949  392192 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0408 11:46:10.179000  392192 fix.go:56] duration metric: took 1m31.870921572s for fixHost
	I0408 11:46:10.179033  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.181878  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.182292  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.182324  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.182507  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.182686  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.182868  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.182983  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.183158  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:46:10.183344  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:46:10.183355  392192 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 11:46:10.296963  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576770.263978103
	
	I0408 11:46:10.296995  392192 fix.go:216] guest clock: 1712576770.263978103
	I0408 11:46:10.297010  392192 fix.go:229] Guest: 2024-04-08 11:46:10.263978103 +0000 UTC Remote: 2024-04-08 11:46:10.179010332 +0000 UTC m=+92.016999693 (delta=84.967771ms)
	I0408 11:46:10.297053  392192 fix.go:200] guest clock delta is within tolerance: 84.967771ms
	I0408 11:46:10.297066  392192 start.go:83] releasing machines lock for "ha-438604", held for 1m31.98899047s
	I0408 11:46:10.297092  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.297414  392192 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:46:10.300024  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.300505  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.300546  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.300689  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.301246  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.301425  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.301523  392192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:46:10.301574  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.301701  392192 ssh_runner.go:195] Run: cat /version.json
	I0408 11:46:10.301726  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.304350  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.304564  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.304780  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.304809  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.304959  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.305109  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.305150  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.305195  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.305278  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.305340  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.305471  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:46:10.305486  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.305708  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.305957  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:46:10.423598  392192 ssh_runner.go:195] Run: systemctl --version
	I0408 11:46:10.430911  392192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:46:10.604410  392192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:46:10.610820  392192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:46:10.610912  392192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:46:10.621291  392192 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 11:46:10.621322  392192 start.go:494] detecting cgroup driver to use...
	I0408 11:46:10.621396  392192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:46:10.640597  392192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:46:10.657011  392192 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:46:10.657097  392192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:46:10.673084  392192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:46:10.687745  392192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:46:10.845264  392192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:46:11.003546  392192 docker.go:233] disabling docker service ...
	I0408 11:46:11.003654  392192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:46:11.021251  392192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:46:11.036833  392192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:46:11.189970  392192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:46:11.352603  392192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:46:11.368881  392192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:46:11.390445  392192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:46:11.390508  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.402424  392192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:46:11.402499  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.413878  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.425475  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.436838  392192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:46:11.448452  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.459732  392192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.473134  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.485040  392192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:46:11.495447  392192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:46:11.506014  392192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:46:11.662136  392192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 11:46:12.363821  392192 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:46:12.363905  392192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:46:12.369549  392192 start.go:562] Will wait 60s for crictl version
	I0408 11:46:12.369613  392192 ssh_runner.go:195] Run: which crictl
	I0408 11:46:12.374037  392192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:46:12.416237  392192 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:46:12.416329  392192 ssh_runner.go:195] Run: crio --version
	I0408 11:46:12.447388  392192 ssh_runner.go:195] Run: crio --version
	I0408 11:46:12.481174  392192 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:46:12.482709  392192 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:46:12.486007  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:12.486502  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:12.486533  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:12.486697  392192 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:46:12.492148  392192 kubeadm.go:877] updating cluster {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 11:46:12.492295  392192 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:46:12.492335  392192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:46:12.537662  392192 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:46:12.537690  392192 crio.go:433] Images already preloaded, skipping extraction
	I0408 11:46:12.537785  392192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:46:12.578052  392192 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:46:12.578078  392192 cache_images.go:84] Images are preloaded, skipping loading
	I0408 11:46:12.578089  392192 kubeadm.go:928] updating node { 192.168.39.99 8443 v1.29.3 crio true true} ...
	I0408 11:46:12.578291  392192 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:46:12.578369  392192 ssh_runner.go:195] Run: crio config
	I0408 11:46:12.633193  392192 cni.go:84] Creating CNI manager for ""
	I0408 11:46:12.633220  392192 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0408 11:46:12.633239  392192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 11:46:12.633268  392192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-438604 NodeName:ha-438604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 11:46:12.633448  392192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-438604"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 11:46:12.633476  392192 kube-vip.go:111] generating kube-vip config ...
	I0408 11:46:12.633539  392192 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:46:12.646199  392192 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:46:12.646352  392192 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0408 11:46:12.646413  392192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:46:12.657155  392192 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 11:46:12.657223  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0408 11:46:12.667553  392192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0408 11:46:12.685786  392192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:46:12.703921  392192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0408 11:46:12.722577  392192 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0408 11:46:12.739893  392192 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:46:12.745408  392192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:46:12.893401  392192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:46:12.909203  392192 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.99
	I0408 11:46:12.909232  392192 certs.go:194] generating shared ca certs ...
	I0408 11:46:12.909254  392192 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:46:12.909456  392192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:46:12.909518  392192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:46:12.909533  392192 certs.go:256] generating profile certs ...
	I0408 11:46:12.909632  392192 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:46:12.909667  392192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550
	I0408 11:46:12.909691  392192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.94 192.168.39.254]
	I0408 11:46:13.107246  392192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550 ...
	I0408 11:46:13.107293  392192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550: {Name:mk2a66efb8e7b70b4d7242b919efe4d10dc76679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:46:13.107507  392192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550 ...
	I0408 11:46:13.107527  392192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550: {Name:mk61497c5d18ad8679ebd14d2f81bc2ffe59a139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:46:13.107630  392192 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:46:13.107840  392192 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
	I0408 11:46:13.108027  392192 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
	I0408 11:46:13.108057  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:46:13.108079  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:46:13.108100  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:46:13.108116  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:46:13.108135  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:46:13.108153  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:46:13.108173  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:46:13.108190  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:46:13.108255  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:46:13.108296  392192 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:46:13.108312  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:46:13.108348  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:46:13.108477  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:46:13.108511  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:46:13.108583  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:46:13.108621  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.108641  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.108663  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.109465  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:46:13.159383  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:46:13.200310  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:46:13.234594  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:46:13.260171  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 11:46:13.296092  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 11:46:13.323008  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:46:13.351189  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:46:13.378797  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:46:13.406048  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:46:13.432550  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:46:13.459903  392192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 11:46:13.478008  392192 ssh_runner.go:195] Run: openssl version
	I0408 11:46:13.484587  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:46:13.496485  392192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.501616  392192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.501694  392192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.507892  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:46:13.518325  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:46:13.530903  392192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.536020  392192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.536104  392192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.542665  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:46:13.554153  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:46:13.566387  392192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.571572  392192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.571643  392192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.577934  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 11:46:13.588432  392192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:46:13.593528  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 11:46:13.600017  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 11:46:13.606439  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 11:46:13.612594  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 11:46:13.618678  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 11:46:13.624800  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 11:46:13.630877  392192 kubeadm.go:391] StartCluster: {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:46:13.631009  392192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 11:46:13.631068  392192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:46:13.673709  392192 cri.go:89] found id: "802aef6168e17b113cf37d1afff2ef056b19deb40792ecd9788195491f5badfa"
	I0408 11:46:13.673747  392192 cri.go:89] found id: "9e33b38740b9fc869c0e33947343ff68cb4602e28c71caa1b085888d2d0fa357"
	I0408 11:46:13.673753  392192 cri.go:89] found id: "af9e53f4b07b92152c87ad1055792e15e0ec552ccf4a0f9e8e52f169a1bfac1c"
	I0408 11:46:13.673758  392192 cri.go:89] found id: "7d36288c5678cf4e1e7f1ce02aa857b97f86081eea3b13e20bf674fd4833024b"
	I0408 11:46:13.673761  392192 cri.go:89] found id: "f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d"
	I0408 11:46:13.673766  392192 cri.go:89] found id: "63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938"
	I0408 11:46:13.673770  392192 cri.go:89] found id: "557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654"
	I0408 11:46:13.673774  392192 cri.go:89] found id: "a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119"
	I0408 11:46:13.673778  392192 cri.go:89] found id: "b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861"
	I0408 11:46:13.673786  392192 cri.go:89] found id: "677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842"
	I0408 11:46:13.673794  392192 cri.go:89] found id: "982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a"
	I0408 11:46:13.673798  392192 cri.go:89] found id: "532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18"
	I0408 11:46:13.673803  392192 cri.go:89] found id: "3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef"
	I0408 11:46:13.673806  392192 cri.go:89] found id: ""
	I0408 11:46:13.673848  392192 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.310605829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576917310501806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d61ba23-7b5d-4d1f-b3eb-d02d6dc2bc02 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.311309441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ac9206b-3273-4518-b69f-cad8d352ea0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.311383101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ac9206b-3273-4518-b69f-cad8d352ea0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.311909677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ac9206b-3273-4518-b69f-cad8d352ea0c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.376155990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=142867cc-3dd7-4074-bdd1-653d4ac9e404 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.376236272Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=142867cc-3dd7-4074-bdd1-653d4ac9e404 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.377749288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a981361b-3bd8-4732-ba53-62a9ef74db95 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.378164539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576917378140938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a981361b-3bd8-4732-ba53-62a9ef74db95 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.379179926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=026d3ffd-1409-4b3e-938a-84f4b53acf8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.379237788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=026d3ffd-1409-4b3e-938a-84f4b53acf8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.379793368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=026d3ffd-1409-4b3e-938a-84f4b53acf8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.429661509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f588fb18-7f02-4679-bc30-3b434cc9be9d name=/runtime.v1.RuntimeService/Version
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.429738982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f588fb18-7f02-4679-bc30-3b434cc9be9d name=/runtime.v1.RuntimeService/Version
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.432187807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b4ce75f-cc8c-48c0-8ae7-e130239aa0b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.432845009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576917432804913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b4ce75f-cc8c-48c0-8ae7-e130239aa0b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.433871175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5989023d-1ff9-49b5-940a-e77701809377 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.433949866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5989023d-1ff9-49b5-940a-e77701809377 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.434789014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5989023d-1ff9-49b5-940a-e77701809377 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.526786272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3378d40a-806b-473e-8f04-a4eb1aeb7f25 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.526857371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3378d40a-806b-473e-8f04-a4eb1aeb7f25 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.527969690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3be0535-a78a-488f-9ca0-e1e437d72276 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.528467136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712576917528438938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3be0535-a78a-488f-9ca0-e1e437d72276 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.529090121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0b45451-9b19-4f63-af82-ae140b4350b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.529152657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0b45451-9b19-4f63-af82-ae140b4350b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:48:37 ha-438604 crio[3968]: time="2024-04-08 11:48:37.529731888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0b45451-9b19-4f63-af82-ae140b4350b6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ea03841cefbee       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   c76f6fa586a15       kindnet-82krw
	3a82aec0e7a59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       5                   3c1aadf18bdfa       storage-provisioner
	9b43ce15065cf       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   2                   7bb391b1586eb       kube-controller-manager-ha-438604
	8b2b8a7eb724b       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            3                   afc7832f7d6e5       kube-apiserver-ha-438604
	6157d29435bbd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   2e231a74d5722       busybox-7fdf7869d9-cdh5l
	2b152d885c64b       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   6cdc04070cee5       kube-vip-ha-438604
	68450e531a890       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e111e127bd599       coredns-76f75df574-wqrvc
	8ca11829d6ed9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   c76f6fa586a15       kindnet-82krw
	f3470c370f90d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   627cb3e544fdb       coredns-76f75df574-7gpzq
	9f05730617753       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Exited              kube-controller-manager   1                   7bb391b1586eb       kube-controller-manager-ha-438604
	abb0230fab47a       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      2 minutes ago        Running             kube-scheduler            1                   f0bb641e0019f       kube-scheduler-ha-438604
	ec3dd5d319504       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      2 minutes ago        Running             kube-proxy                1                   3f0df28139117       kube-proxy-v98zm
	c2a76ebe0ee11       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Exited              kube-apiserver            2                   afc7832f7d6e5       kube-apiserver-ha-438604
	81299411c841d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   4b7048b6c7973       etcd-ha-438604
	11b291bd9a246       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   76f708d0734ed       busybox-7fdf7869d9-cdh5l
	f0cafcafceece       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   acf17bfe1f043       coredns-76f75df574-7gpzq
	63c0e178c3e78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   328834ce582ca       coredns-76f75df574-wqrvc
	a0bffd365d14f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago       Exited              kube-proxy                0                   ffe693490c6c3       kube-proxy-v98zm
	982252ef21b29       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago       Exited              kube-scheduler            0                   95acb68b16e77       kube-scheduler-ha-438604
	532fccde459b9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   681a212174e36       etcd-ha-438604
	
	
	==> coredns [63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938] <==
	[INFO] 10.244.1.2:54572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121059s
	[INFO] 10.244.0.4:55733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248755s
	[INFO] 10.244.0.4:44663 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046479s
	[INFO] 10.244.2.2:43313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161932s
	[INFO] 10.244.2.2:36056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115719s
	[INFO] 10.244.2.2:58531 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248815s
	[INFO] 10.244.1.2:40849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115353s
	[INFO] 10.244.1.2:51289 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105404s
	[INFO] 10.244.1.2:56814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070319s
	[INFO] 10.244.0.4:35492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160626s
	[INFO] 10.244.0.4:34374 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082632s
	[INFO] 10.244.2.2:43756 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109569s
	[INFO] 10.244.2.2:45152 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124387s
	[INFO] 10.244.1.2:38830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135636s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1923&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1927&timeout=8m17s&timeoutSeconds=497&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1958&timeout=9m3s&timeoutSeconds=543&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc] <==
	Trace[645879881]: [10.581953501s] [10.581953501s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49712->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d] <==
	[INFO] 10.244.2.2:33817 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024307s
	[INFO] 10.244.2.2:53777 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096848s
	[INFO] 10.244.1.2:51257 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001970096s
	[INFO] 10.244.1.2:37927 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164729s
	[INFO] 10.244.1.2:46840 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074025s
	[INFO] 10.244.1.2:40034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116336s
	[INFO] 10.244.1.2:46524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110431s
	[INFO] 10.244.0.4:47504 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116612s
	[INFO] 10.244.0.4:52704 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105138s
	[INFO] 10.244.2.2:40699 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000199266s
	[INFO] 10.244.1.2:46666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009956s
	[INFO] 10.244.0.4:57492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119263s
	[INFO] 10.244.0.4:45362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139004s
	[INFO] 10.244.2.2:58706 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239864s
	[INFO] 10.244.2.2:32981 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128008s
	[INFO] 10.244.1.2:38182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167786s
	[INFO] 10.244.1.2:44324 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206004s
	[INFO] 10.244.1.2:37810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013702s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1958&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a] <==
	Trace[316290978]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59182->10.96.0.1:443: read: connection reset by peer 10243ms (11:46:38.822)
	Trace[316290978]: [10.243418209s] [10.243418209s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59182->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1347173231]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Apr-2024 11:46:28.378) (total time: 10443ms):
	Trace[1347173231]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59154->10.96.0.1:443: read: connection reset by peer 10443ms (11:46:38.822)
	Trace[1347173231]: [10.443989451s] [10.443989451s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59154->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35288->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35288->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-438604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T11_34_48_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:34:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:48:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:35:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-438604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d242cef9ed484660b2c31aeed7e51ff5
	  System UUID:                d242cef9-ed48-4660-b2c3-1aeed7e51ff5
	  Boot ID:                    336ee057-2212-4601-ad25-56ebfd2bc06e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-cdh5l             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-76f75df574-7gpzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-wqrvc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-438604                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-82krw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-438604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-438604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v98zm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-438604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-438604                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-438604 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Warning  ContainerGCFailed        2m50s (x2 over 3m50s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           84s                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	
	
	Name:               ha-438604-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_36_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:36:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:48:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    ha-438604-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 957d2c54c49c48d0b297f4467d1bac27
	  System UUID:                957d2c54-c49c-48d0-b297-f4467d1bac27
	  Boot ID:                    7918dd25-1555-4fc9-bbdf-e61f03277376
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jz4h9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-438604-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-b5ztk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-438604-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-438604-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5vc66                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-438604-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-438604-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 81s                  kube-proxy       
	  Normal  Starting                 12m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)    kubelet          Node ha-438604-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)    kubelet          Node ha-438604-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)    kubelet          Node ha-438604-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeReady                12m                  kubelet          Node ha-438604-m02 status is now: NodeReady
	  Normal  RegisteredNode           11m                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeNotReady             8m51s                node-controller  Node ha-438604-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node ha-438604-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           84s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           83s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           35s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	
	
	Name:               ha-438604-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_37_39_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:48:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:48:08 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:48:08 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:48:08 +0000   Mon, 08 Apr 2024 11:37:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:48:08 +0000   Mon, 08 Apr 2024 11:37:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    ha-438604-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27ce6086b0b04606902de8def056d57d
	  System UUID:                27ce6086-b0b0-4606-902d-e8def056d57d
	  Boot ID:                    8ab830e2-477c-4ad9-a3a2-0d7ab6c47517
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-gk5bx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-438604-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-dg6vt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-438604-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-438604-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-pcbq6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-438604-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-438604-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-438604-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-438604-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-438604-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal   RegisteredNode           84s                node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal   RegisteredNode           83s                node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node ha-438604-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node ha-438604-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node ha-438604-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 60s                kubelet          Node ha-438604-m03 has been rebooted, boot id: 8ab830e2-477c-4ad9-a3a2-0d7ab6c47517
	  Normal   RegisteredNode           35s                node-controller  Node ha-438604-m03 event: Registered Node ha-438604-m03 in Controller
	
	
	Name:               ha-438604-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_38_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:38:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:48:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:48:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:48:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:48:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:48:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-438604-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0df153c018eb4bd3bce7e2132da5651e
	  System UUID:                0df153c0-18eb-4bd3-bce7-e2132da5651e
	  Boot ID:                    aa2279bb-1c5b-4505-9645-4e27e21c0101
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8rrcs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2vmwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 9m54s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-438604-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m59s              node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           9m57s              node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           9m56s              node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   NodeReady                9m50s              kubelet          Node ha-438604-m04 status is now: NodeReady
	  Normal   RegisteredNode           84s                node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           83s                node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   NodeNotReady             44s                node-controller  Node ha-438604-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-438604-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-438604-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-438604-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-438604-m04 has been rebooted, boot id: aa2279bb-1c5b-4505-9645-4e27e21c0101
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-438604-m04 status is now: NodeReady
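
Note: the percentages in the "Allocated resources" tables above are plain integer shares of each node's Allocatable figures (2 CPUs and 2164184Ki of memory on these VMs). A minimal sketch of that arithmetic using the ha-438604-m02 numbers, purely illustrative and not the test suite's code:

	package main
	
	import "fmt"
	
	func main() {
		// Allocatable values reported for the ha-438604 nodes above:
		// cpu: 2 (i.e. 2000m), memory: 2164184Ki.
		allocatableCPUMilli := int64(2000)
		allocatableMemoryKi := int64(2164184)
	
		// Requests from the ha-438604-m02 "Allocated resources" table.
		cpuRequestMilli := int64(750)         // 750m
		memoryRequestKi := int64(150 * 1024)  // 150Mi
	
		// kubectl describe prints integer percentages of Allocatable,
		// so 750m/2000m shows as 37% and 150Mi/2164184Ki as 7%.
		fmt.Printf("cpu:    %d%%\n", cpuRequestMilli*100/allocatableCPUMilli)
		fmt.Printf("memory: %d%%\n", memoryRequestKi*100/allocatableMemoryKi)
	}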
	
	
	==> dmesg <==
	[ +11.215969] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.059868] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060056] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.165820] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.136183] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.312263] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.585051] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.065353] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.627952] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.807124] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.007059] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.588023] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[Apr 8 11:35] kauditd_printk_skb: 15 callbacks suppressed
	[Apr 8 11:36] kauditd_printk_skb: 78 callbacks suppressed
	[Apr 8 11:43] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 8 11:46] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.094109] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067504] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +0.183172] systemd-fstab-generator[3912]: Ignoring "noauto" option for root device
	[  +0.165677] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[  +0.313219] systemd-fstab-generator[3953]: Ignoring "noauto" option for root device
	[  +1.227917] systemd-fstab-generator[4054]: Ignoring "noauto" option for root device
	[  +3.126783] kauditd_printk_skb: 127 callbacks suppressed
	[ +15.824835] kauditd_printk_skb: 75 callbacks suppressed
	[ +22.788409] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18] <==
	{"level":"info","ts":"2024-04-08T11:44:39.252855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.25295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.252965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 received MsgPreVoteResp from 3b7a74ffda0d9c54 at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.25298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 [logterm: 2, index: 2281] sent MsgPreVote request to 780efa3e7bded717 at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.252987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 [logterm: 2, index: 2281] sent MsgPreVote request to 7ff681eaaadd5fcd at term 2"}
	{"level":"warn","ts":"2024-04-08T11:44:39.39443Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.99:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T11:44:39.394494Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.99:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-08T11:44:39.394691Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3b7a74ffda0d9c54","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-08T11:44:39.394865Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.394941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.394996Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395141Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395209Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395324Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395337Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395344Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395353Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395409Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395469Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395652Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395721Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395733Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.40024Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2024-04-08T11:44:39.400365Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2024-04-08T11:44:39.400376Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-438604","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.99:2380"],"advertise-client-urls":["https://192.168.39.99:2379"]}
	
	
	==> etcd [81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b] <==
	{"level":"warn","ts":"2024-04-08T11:47:32.637116Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-04-08T11:47:32.637266Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-04-08T11:47:36.37837Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.94:2380/version","remote-member-id":"780efa3e7bded717","error":"Get \"https://192.168.39.94:2380/version\": dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:36.378613Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"780efa3e7bded717","error":"Get \"https://192.168.39.94:2380/version\": dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:37.637786Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:37.637845Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:40.38075Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.94:2380/version","remote-member-id":"780efa3e7bded717","error":"Get \"https://192.168.39.94:2380/version\": dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:40.380894Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"780efa3e7bded717","error":"Get \"https://192.168.39.94:2380/version\": dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:42.638598Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:42.638887Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:44.383963Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.94:2380/version","remote-member-id":"780efa3e7bded717","error":"Get \"https://192.168.39.94:2380/version\": dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:44.384079Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"780efa3e7bded717","error":"Get \"https://192.168.39.94:2380/version\": dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-08T11:47:46.459485Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:46.478501Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3b7a74ffda0d9c54","to":"780efa3e7bded717","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-08T11:47:46.479687Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:46.485508Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3b7a74ffda0d9c54","to":"780efa3e7bded717","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-08T11:47:46.485698Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:46.500631Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:46.509245Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:47.486853Z","caller":"traceutil/trace.go:171","msg":"trace[182480743] transaction","detail":"{read_only:false; response_revision:2395; number_of_response:1; }","duration":"147.557307ms","start":"2024-04-08T11:47:47.339272Z","end":"2024-04-08T11:47:47.48683Z","steps":["trace[182480743] 'process raft request'  (duration: 147.47306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:47:47.63997Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:47.640036Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:48:00.851366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.466113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-08T11:48:00.851599Z","caller":"traceutil/trace.go:171","msg":"trace[1280738319] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2463; }","duration":"104.772328ms","start":"2024-04-08T11:48:00.746788Z","end":"2024-04-08T11:48:00.851561Z","steps":["trace[1280738319] 'count revisions from in-memory index tree'  (duration: 103.320612ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:48:01.839393Z","caller":"traceutil/trace.go:171","msg":"trace[814372312] transaction","detail":"{read_only:false; response_revision:2467; number_of_response:1; }","duration":"116.04189ms","start":"2024-04-08T11:48:01.723334Z","end":"2024-04-08T11:48:01.839376Z","steps":["trace[814372312] 'process raft request'  (duration: 115.936035ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:48:38 up 14 min,  0 users,  load average: 0.73, 0.85, 0.50
	Linux ha-438604 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f] <==
	I0408 11:46:17.183354       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0408 11:46:17.183454       1 main.go:107] hostIP = 192.168.39.99
	podIP = 192.168.39.99
	I0408 11:46:17.183709       1 main.go:116] setting mtu 1500 for CNI 
	I0408 11:46:17.183734       1 main.go:146] kindnetd IP family: "ipv4"
	I0408 11:46:17.183759       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0408 11:46:27.383478       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0408 11:46:37.384725       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0408 11:46:39.126332       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0408 11:46:41.127989       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0408 11:46:44.128792       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
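
Note: the kindnet log above shows a bounded retry loop that gives up with a panic once the API server stays unreachable, which is why this container exits and a replacement kindnet container appears in the next section. A minimal sketch of that retry-then-panic pattern; the names, retry count, and delay are illustrative, not kindnetd's actual implementation:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// listNodes stands in for the call that kept failing above
	// (Get "https://10.96.0.1:443/api/v1/nodes"); here it always errors.
	func listNodes() error {
		return fmt.Errorf("dial tcp 10.96.0.1:443: connect: connection refused")
	}
	
	func main() {
		const maxRetries = 5 // illustrative cap; the real limit lives in kindnetd's main.go
		var err error
		for i := 0; i < maxRetries; i++ {
			if err = listNodes(); err == nil {
				return
			}
			fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
			time.Sleep(2 * time.Second)
		}
		panic(fmt.Sprintf("Reached maximum retries obtaining node list: %v", err))
	}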
	
	
	==> kindnet [ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9] <==
	I0408 11:48:07.551660       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:48:17.568899       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:48:17.568980       1 main.go:227] handling current node
	I0408 11:48:17.568998       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:48:17.569008       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:48:17.569153       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:48:17.569192       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:48:17.569263       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:48:17.569304       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:48:27.580677       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:48:27.580801       1 main.go:227] handling current node
	I0408 11:48:27.580860       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:48:27.580889       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:48:27.581096       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:48:27.581143       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:48:27.581235       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:48:27.581264       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:48:37.597589       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:48:37.600609       1 main.go:227] handling current node
	I0408 11:48:37.600673       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:48:37.600698       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:48:37.600798       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0408 11:48:37.602275       1 main.go:250] Node ha-438604-m03 has CIDR [10.244.2.0/24] 
	I0408 11:48:37.602747       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:48:37.603192       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6] <==
	I0408 11:46:57.408464       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0408 11:46:57.407189       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0408 11:46:57.407222       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0408 11:46:57.407260       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0408 11:46:57.407279       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0408 11:46:57.409717       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0408 11:46:57.484285       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 11:46:57.490661       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 11:46:57.505672       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0408 11:46:57.506464       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0408 11:46:57.506706       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0408 11:46:57.507048       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 11:46:57.508830       1 shared_informer.go:318] Caches are synced for configmaps
	I0408 11:46:57.509942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0408 11:46:57.510222       1 aggregator.go:165] initial CRD sync complete...
	I0408 11:46:57.510696       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 11:46:57.510735       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 11:46:57.510759       1 cache.go:39] Caches are synced for autoregister controller
	I0408 11:46:57.511677       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	W0408 11:46:57.555365       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94]
	I0408 11:46:57.558745       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 11:46:57.587741       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0408 11:46:57.596637       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0408 11:46:58.423170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0408 11:46:58.954580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94 192.168.39.99]
	
	
	==> kube-apiserver [c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e] <==
	I0408 11:46:17.164042       1 options.go:222] external host was not specified, using 192.168.39.99
	I0408 11:46:17.182498       1 server.go:148] Version: v1.29.3
	I0408 11:46:17.184187       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:46:17.809899       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0408 11:46:17.813885       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0408 11:46:17.813924       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0408 11:46:17.814225       1 instance.go:297] Using reconciler: lease
	W0408 11:46:37.805114       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0408 11:46:37.807624       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0408 11:46:37.817779       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a] <==
	I0408 11:47:14.498968       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0408 11:47:14.503192       1 shared_informer.go:318] Caches are synced for PVC protection
	I0408 11:47:14.521011       1 shared_informer.go:318] Caches are synced for ephemeral
	I0408 11:47:14.523203       1 shared_informer.go:318] Caches are synced for resource quota
	I0408 11:47:14.525343       1 shared_informer.go:318] Caches are synced for resource quota
	I0408 11:47:14.543194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="52.915455ms"
	I0408 11:47:14.543399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="103.449µs"
	I0408 11:47:14.546730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.597808ms"
	I0408 11:47:14.549611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="185.245µs"
	I0408 11:47:14.835804       1 shared_informer.go:318] Caches are synced for garbage collector
	I0408 11:47:14.835899       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0408 11:47:14.859101       1 shared_informer.go:318] Caches are synced for garbage collector
	I0408 11:47:24.044076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="24.546918ms"
	I0408 11:47:24.044242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="87.06µs"
	I0408 11:47:39.083026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.644509ms"
	I0408 11:47:39.084658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="72.15µs"
	I0408 11:47:44.056621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="20.748421ms"
	I0408 11:47:44.057122       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0408 11:47:44.060122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="644.739µs"
	I0408 11:47:54.521060       1 event.go:376] "Event occurred" object="ha-438604-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-438604-m04 status is now: NodeNotReady"
	I0408 11:47:54.550221       1 event.go:376] "Event occurred" object="kube-system/kindnet-8rrcs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 11:47:54.572431       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-2vmwq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 11:47:58.356429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.198691ms"
	I0408 11:47:58.356598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="101.575µs"
	I0408 11:48:29.102129       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-438604-m04"
	
	
	==> kube-controller-manager [9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38] <==
	I0408 11:46:18.030280       1 serving.go:380] Generated self-signed cert in-memory
	I0408 11:46:18.306917       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0408 11:46:18.306964       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:46:18.308813       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0408 11:46:18.309204       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0408 11:46:18.310134       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0408 11:46:18.310162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0408 11:46:38.827167       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.99:8443/healthz\": dial tcp 192.168.39.99:8443: connect: connection refused"
	
	
	==> kube-proxy [a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119] <==
	E0408 11:43:24.246691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:27.318076       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:27.318204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:27.318081       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:27.318262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:30.390062       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:30.390204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:33.462962       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:33.463096       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:33.463418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:33.463451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:36.535471       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:36.535583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:42.679691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:42.679762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:45.750337       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:45.750417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:48.822683       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:48.822733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:44:01.110841       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:44:01.110983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:44:01.111805       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:44:01.111935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:44:10.327582       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:44:10.327653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
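
Note: every reflector failure in the block above targets the control-plane VIP 192.168.39.254:8443, reporting "no route to host" while the control plane is down. A plain TCP dial against that address reproduces the same diagnosis without involving client-go; a minimal sketch, with the address taken from the log and everything else illustrative:

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// VIP and port taken from the reflector errors above; a raw TCP dial
		// distinguishes "no route to host" / "connection refused" from a
		// healthy listener.
		const addr = "192.168.39.254:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("apiserver VIP %s unreachable: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("apiserver VIP %s accepts TCP connections\n", addr)
	}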
	
	
	==> kube-proxy [ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3] <==
	I0408 11:46:18.315345       1 server_others.go:72] "Using iptables proxy"
	E0408 11:46:19.350078       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:22.423096       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:25.494114       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:31.638297       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:40.855172       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0408 11:46:58.432669       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	I0408 11:46:58.517415       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 11:46:58.517502       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 11:46:58.517903       1 server_others.go:168] "Using iptables Proxier"
	I0408 11:46:58.527703       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 11:46:58.532997       1 server.go:865] "Version info" version="v1.29.3"
	I0408 11:46:58.534476       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:46:58.536099       1 config.go:188] "Starting service config controller"
	I0408 11:46:58.536231       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 11:46:58.536435       1 config.go:97] "Starting endpoint slice config controller"
	I0408 11:46:58.536506       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 11:46:58.538905       1 config.go:315] "Starting node config controller"
	I0408 11:46:58.539007       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 11:46:58.637606       1 shared_informer.go:318] Caches are synced for service config
	I0408 11:46:58.639288       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 11:46:58.639987       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a] <==
	E0408 11:44:30.936727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:31.013765       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:44:31.013907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:44:31.432981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 11:44:31.433078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 11:44:35.145634       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:44:35.145782       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 11:44:35.243702       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:35.243822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:35.579374       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 11:44:35.579501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 11:44:35.907693       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 11:44:35.907817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 11:44:36.069933       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 11:44:36.070048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 11:44:36.592786       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:36.592919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:38.948931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 11:44:38.948964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 11:44:39.060629       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:44:39.060660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0408 11:44:39.094779       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0408 11:44:39.094927       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0408 11:44:39.095228       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0408 11:44:39.098865       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada] <==
	W0408 11:46:47.089085       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.99:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:47.089204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.99:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:47.373029       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.99:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:47.373157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.99:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:47.677402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.99:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:47.677655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.99:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:47.739255       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.99:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:47.739324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.99:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:48.939042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.99:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:48.939128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.99:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:48.988462       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.99:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:48.988508       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.99:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:52.340379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.99:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:52.340632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.99:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:52.879856       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.99:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:52.880008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.99:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:54.196674       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.99:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:54.196750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.99:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:57.427412       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 11:46:57.429388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 11:46:57.457809       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 11:46:57.457910       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:46:57.458073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 11:46:57.458493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0408 11:46:58.536892       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 11:47:11 ha-438604 kubelet[1376]: I0408 11:47:11.439972    1376 scope.go:117] "RemoveContainer" containerID="9e33b38740b9fc869c0e33947343ff68cb4602e28c71caa1b085888d2d0fa357"
	Apr 08 11:47:13 ha-438604 kubelet[1376]: I0408 11:47:13.334965    1376 scope.go:117] "RemoveContainer" containerID="9e33b38740b9fc869c0e33947343ff68cb4602e28c71caa1b085888d2d0fa357"
	Apr 08 11:47:13 ha-438604 kubelet[1376]: I0408 11:47:13.338613    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:47:13 ha-438604 kubelet[1376]: E0408 11:47:13.342968    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:47:16 ha-438604 kubelet[1376]: I0408 11:47:16.445426    1376 scope.go:117] "RemoveContainer" containerID="8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f"
	Apr 08 11:47:25 ha-438604 kubelet[1376]: I0408 11:47:25.439728    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:47:25 ha-438604 kubelet[1376]: E0408 11:47:25.440318    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:47:40 ha-438604 kubelet[1376]: I0408 11:47:40.441742    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:47:40 ha-438604 kubelet[1376]: E0408 11:47:40.442327    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:47:48 ha-438604 kubelet[1376]: E0408 11:47:48.500910    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:47:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:47:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:47:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:47:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:47:51 ha-438604 kubelet[1376]: I0408 11:47:51.440756    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:47:51 ha-438604 kubelet[1376]: E0408 11:47:51.441811    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:47:59 ha-438604 kubelet[1376]: I0408 11:47:59.441205    1376 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-438604" podUID="e1ddf46e-d497-49ba-97bc-bc23a32be91a"
	Apr 08 11:47:59 ha-438604 kubelet[1376]: I0408 11:47:59.466298    1376 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-438604"
	Apr 08 11:48:04 ha-438604 kubelet[1376]: I0408 11:48:04.441747    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:48:04 ha-438604 kubelet[1376]: E0408 11:48:04.442077    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:48:08 ha-438604 kubelet[1376]: I0408 11:48:08.462739    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-438604" podStartSLOduration=9.46242599 podStartE2EDuration="9.46242599s" podCreationTimestamp="2024-04-08 11:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-08 11:48:08.461597178 +0000 UTC m=+800.273948607" watchObservedRunningTime="2024-04-08 11:48:08.46242599 +0000 UTC m=+800.274777417"
	Apr 08 11:48:19 ha-438604 kubelet[1376]: I0408 11:48:19.440127    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:48:19 ha-438604 kubelet[1376]: E0408 11:48:19.440827    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:48:32 ha-438604 kubelet[1376]: I0408 11:48:32.440326    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:48:32 ha-438604 kubelet[1376]: E0408 11:48:32.440638    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 11:48:36.971359  393472 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18588-368424/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-438604 -n ha-438604
helpers_test.go:261: (dbg) Run:  kubectl --context ha-438604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 stop -v=7 --alsologtostderr
E0408 11:50:07.588974  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 stop -v=7 --alsologtostderr: exit status 82 (2m0.517134397s)

                                                
                                                
-- stdout --
	* Stopping node "ha-438604-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:48:57.381055  393880 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:48:57.381194  393880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:48:57.381204  393880 out.go:304] Setting ErrFile to fd 2...
	I0408 11:48:57.381209  393880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:48:57.381462  393880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:48:57.381745  393880 out.go:298] Setting JSON to false
	I0408 11:48:57.381831  393880 mustload.go:65] Loading cluster: ha-438604
	I0408 11:48:57.382239  393880 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:48:57.382330  393880 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:48:57.382529  393880 mustload.go:65] Loading cluster: ha-438604
	I0408 11:48:57.382658  393880 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:48:57.382682  393880 stop.go:39] StopHost: ha-438604-m04
	I0408 11:48:57.383081  393880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:48:57.383150  393880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:48:57.398904  393880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I0408 11:48:57.399475  393880 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:48:57.400274  393880 main.go:141] libmachine: Using API Version  1
	I0408 11:48:57.400303  393880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:48:57.400774  393880 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:48:57.403724  393880 out.go:177] * Stopping node "ha-438604-m04"  ...
	I0408 11:48:57.405353  393880 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 11:48:57.405388  393880 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:48:57.405665  393880 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 11:48:57.405693  393880 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:48:57.408802  393880 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:48:57.409332  393880 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:48:20 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:48:57.409365  393880 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:48:57.409540  393880 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:48:57.409733  393880 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:48:57.409935  393880 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:48:57.410084  393880 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	I0408 11:48:57.496220  393880 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 11:48:57.552245  393880 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 11:48:57.607001  393880 main.go:141] libmachine: Stopping "ha-438604-m04"...
	I0408 11:48:57.607042  393880 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:48:57.608500  393880 main.go:141] libmachine: (ha-438604-m04) Calling .Stop
	I0408 11:48:57.611973  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 0/120
	I0408 11:48:58.614224  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 1/120
	I0408 11:48:59.615653  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 2/120
	I0408 11:49:00.617297  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 3/120
	I0408 11:49:01.618847  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 4/120
	I0408 11:49:02.621021  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 5/120
	I0408 11:49:03.622546  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 6/120
	I0408 11:49:04.624089  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 7/120
	I0408 11:49:05.626522  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 8/120
	I0408 11:49:06.628855  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 9/120
	I0408 11:49:07.630276  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 10/120
	I0408 11:49:08.631801  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 11/120
	I0408 11:49:09.633614  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 12/120
	I0408 11:49:10.635054  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 13/120
	I0408 11:49:11.636570  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 14/120
	I0408 11:49:12.638560  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 15/120
	I0408 11:49:13.640154  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 16/120
	I0408 11:49:14.642312  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 17/120
	I0408 11:49:15.644672  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 18/120
	I0408 11:49:16.647099  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 19/120
	I0408 11:49:17.649289  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 20/120
	I0408 11:49:18.650876  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 21/120
	I0408 11:49:19.652341  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 22/120
	I0408 11:49:20.654114  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 23/120
	I0408 11:49:21.656324  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 24/120
	I0408 11:49:22.658547  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 25/120
	I0408 11:49:23.659925  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 26/120
	I0408 11:49:24.662385  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 27/120
	I0408 11:49:25.663995  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 28/120
	I0408 11:49:26.665920  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 29/120
	I0408 11:49:27.667437  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 30/120
	I0408 11:49:28.669102  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 31/120
	I0408 11:49:29.670866  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 32/120
	I0408 11:49:30.673067  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 33/120
	I0408 11:49:31.674542  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 34/120
	I0408 11:49:32.675944  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 35/120
	I0408 11:49:33.678178  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 36/120
	I0408 11:49:34.679597  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 37/120
	I0408 11:49:35.681100  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 38/120
	I0408 11:49:36.682716  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 39/120
	I0408 11:49:37.684997  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 40/120
	I0408 11:49:38.686749  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 41/120
	I0408 11:49:39.688434  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 42/120
	I0408 11:49:40.690378  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 43/120
	I0408 11:49:41.691785  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 44/120
	I0408 11:49:42.694297  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 45/120
	I0408 11:49:43.696050  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 46/120
	I0408 11:49:44.697634  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 47/120
	I0408 11:49:45.698980  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 48/120
	I0408 11:49:46.700430  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 49/120
	I0408 11:49:47.702672  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 50/120
	I0408 11:49:48.704248  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 51/120
	I0408 11:49:49.706424  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 52/120
	I0408 11:49:50.708209  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 53/120
	I0408 11:49:51.709791  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 54/120
	I0408 11:49:52.711988  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 55/120
	I0408 11:49:53.713475  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 56/120
	I0408 11:49:54.715050  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 57/120
	I0408 11:49:55.716862  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 58/120
	I0408 11:49:56.718718  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 59/120
	I0408 11:49:57.721038  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 60/120
	I0408 11:49:58.722513  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 61/120
	I0408 11:49:59.723771  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 62/120
	I0408 11:50:00.725176  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 63/120
	I0408 11:50:01.727444  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 64/120
	I0408 11:50:02.729350  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 65/120
	I0408 11:50:03.730629  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 66/120
	I0408 11:50:04.732074  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 67/120
	I0408 11:50:05.733526  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 68/120
	I0408 11:50:06.735307  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 69/120
	I0408 11:50:07.736969  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 70/120
	I0408 11:50:08.738965  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 71/120
	I0408 11:50:09.740607  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 72/120
	I0408 11:50:10.742370  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 73/120
	I0408 11:50:11.744725  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 74/120
	I0408 11:50:12.746963  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 75/120
	I0408 11:50:13.748750  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 76/120
	I0408 11:50:14.750342  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 77/120
	I0408 11:50:15.751790  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 78/120
	I0408 11:50:16.753319  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 79/120
	I0408 11:50:17.755136  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 80/120
	I0408 11:50:18.756783  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 81/120
	I0408 11:50:19.758481  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 82/120
	I0408 11:50:20.760789  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 83/120
	I0408 11:50:21.762828  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 84/120
	I0408 11:50:22.765073  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 85/120
	I0408 11:50:23.767652  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 86/120
	I0408 11:50:24.769272  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 87/120
	I0408 11:50:25.770881  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 88/120
	I0408 11:50:26.772580  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 89/120
	I0408 11:50:27.774909  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 90/120
	I0408 11:50:28.776352  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 91/120
	I0408 11:50:29.778450  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 92/120
	I0408 11:50:30.780118  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 93/120
	I0408 11:50:31.781618  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 94/120
	I0408 11:50:32.784149  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 95/120
	I0408 11:50:33.785521  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 96/120
	I0408 11:50:34.787341  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 97/120
	I0408 11:50:35.788866  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 98/120
	I0408 11:50:36.790892  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 99/120
	I0408 11:50:37.793186  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 100/120
	I0408 11:50:38.794681  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 101/120
	I0408 11:50:39.796163  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 102/120
	I0408 11:50:40.797643  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 103/120
	I0408 11:50:41.799939  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 104/120
	I0408 11:50:42.802107  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 105/120
	I0408 11:50:43.803608  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 106/120
	I0408 11:50:44.805077  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 107/120
	I0408 11:50:45.807555  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 108/120
	I0408 11:50:46.808952  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 109/120
	I0408 11:50:47.811003  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 110/120
	I0408 11:50:48.812523  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 111/120
	I0408 11:50:49.814287  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 112/120
	I0408 11:50:50.815706  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 113/120
	I0408 11:50:51.817527  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 114/120
	I0408 11:50:52.819445  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 115/120
	I0408 11:50:53.821448  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 116/120
	I0408 11:50:54.822906  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 117/120
	I0408 11:50:55.825091  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 118/120
	I0408 11:50:56.826543  393880 main.go:141] libmachine: (ha-438604-m04) Waiting for machine to stop 119/120
	I0408 11:50:57.827751  393880 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0408 11:50:57.827844  393880 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0408 11:50:57.829663  393880 out.go:177] 
	W0408 11:50:57.831234  393880 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0408 11:50:57.831256  393880 out.go:239] * 
	* 
	W0408 11:50:57.834866  393880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 11:50:57.836287  393880 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-438604 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr: exit status 3 (19.106225665s)

                                                
                                                
-- stdout --
	ha-438604
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-438604-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:50:57.903085  394318 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:50:57.903224  394318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:50:57.903240  394318 out.go:304] Setting ErrFile to fd 2...
	I0408 11:50:57.903251  394318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:50:57.903515  394318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:50:57.903781  394318 out.go:298] Setting JSON to false
	I0408 11:50:57.903812  394318 mustload.go:65] Loading cluster: ha-438604
	I0408 11:50:57.903950  394318 notify.go:220] Checking for updates...
	I0408 11:50:57.904310  394318 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:50:57.904330  394318 status.go:255] checking status of ha-438604 ...
	I0408 11:50:57.904796  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:57.904902  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:57.924454  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I0408 11:50:57.924944  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:57.925760  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:57.925796  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:57.926190  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:57.926488  394318 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:50:57.928301  394318 status.go:330] ha-438604 host status = "Running" (err=<nil>)
	I0408 11:50:57.928323  394318 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:50:57.928651  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:57.928712  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:57.944268  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41889
	I0408 11:50:57.944689  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:57.945166  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:57.945190  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:57.945571  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:57.945738  394318 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:50:57.948327  394318 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:50:57.948844  394318 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:50:57.948866  394318 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:50:57.949088  394318 host.go:66] Checking if "ha-438604" exists ...
	I0408 11:50:57.949385  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:57.949443  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:57.964808  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0408 11:50:57.965368  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:57.965880  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:57.965910  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:57.966207  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:57.966422  394318 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:50:57.966613  394318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:50:57.966655  394318 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:50:57.969639  394318 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:50:57.970062  394318 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:50:57.970092  394318 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:50:57.970295  394318 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:50:57.970505  394318 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:50:57.970684  394318 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:50:57.970845  394318 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:50:58.061992  394318 ssh_runner.go:195] Run: systemctl --version
	I0408 11:50:58.069701  394318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:50:58.089829  394318 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:50:58.089880  394318 api_server.go:166] Checking apiserver status ...
	I0408 11:50:58.089943  394318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:50:58.107875  394318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5145/cgroup
	W0408 11:50:58.119929  394318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:50:58.120002  394318 ssh_runner.go:195] Run: ls
	I0408 11:50:58.125171  394318 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:50:58.131429  394318 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:50:58.131462  394318 status.go:422] ha-438604 apiserver status = Running (err=<nil>)
	I0408 11:50:58.131473  394318 status.go:257] ha-438604 status: &{Name:ha-438604 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:50:58.131493  394318 status.go:255] checking status of ha-438604-m02 ...
	I0408 11:50:58.131884  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:58.131925  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:58.147337  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41195
	I0408 11:50:58.147804  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:58.148293  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:58.148314  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:58.148662  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:58.148866  394318 main.go:141] libmachine: (ha-438604-m02) Calling .GetState
	I0408 11:50:58.150426  394318 status.go:330] ha-438604-m02 host status = "Running" (err=<nil>)
	I0408 11:50:58.150446  394318 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:50:58.150751  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:58.150788  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:58.166521  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40655
	I0408 11:50:58.167098  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:58.167656  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:58.167707  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:58.168063  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:58.168293  394318 main.go:141] libmachine: (ha-438604-m02) Calling .GetIP
	I0408 11:50:58.171008  394318 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:50:58.171520  394318 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:46:25 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:50:58.171552  394318 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:50:58.171747  394318 host.go:66] Checking if "ha-438604-m02" exists ...
	I0408 11:50:58.172089  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:58.172134  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:58.187041  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45023
	I0408 11:50:58.187468  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:58.188414  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:58.188444  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:58.189058  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:58.189691  394318 main.go:141] libmachine: (ha-438604-m02) Calling .DriverName
	I0408 11:50:58.189906  394318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:50:58.189939  394318 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHHostname
	I0408 11:50:58.192743  394318 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:50:58.193130  394318 main.go:141] libmachine: (ha-438604-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2b:19", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:46:25 +0000 UTC Type:0 Mac:52:54:00:b9:2b:19 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-438604-m02 Clientid:01:52:54:00:b9:2b:19}
	I0408 11:50:58.193164  394318 main.go:141] libmachine: (ha-438604-m02) DBG | domain ha-438604-m02 has defined IP address 192.168.39.219 and MAC address 52:54:00:b9:2b:19 in network mk-ha-438604
	I0408 11:50:58.193307  394318 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHPort
	I0408 11:50:58.193473  394318 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHKeyPath
	I0408 11:50:58.193652  394318 main.go:141] libmachine: (ha-438604-m02) Calling .GetSSHUsername
	I0408 11:50:58.193773  394318 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m02/id_rsa Username:docker}
	I0408 11:50:58.277368  394318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 11:50:58.299650  394318 kubeconfig.go:125] found "ha-438604" server: "https://192.168.39.254:8443"
	I0408 11:50:58.299710  394318 api_server.go:166] Checking apiserver status ...
	I0408 11:50:58.299754  394318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 11:50:58.319137  394318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	W0408 11:50:58.331635  394318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 11:50:58.331729  394318 ssh_runner.go:195] Run: ls
	I0408 11:50:58.337729  394318 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0408 11:50:58.342818  394318 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0408 11:50:58.342850  394318 status.go:422] ha-438604-m02 apiserver status = Running (err=<nil>)
	I0408 11:50:58.342861  394318 status.go:257] ha-438604-m02 status: &{Name:ha-438604-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 11:50:58.342877  394318 status.go:255] checking status of ha-438604-m04 ...
	I0408 11:50:58.343224  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:58.343261  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:58.358504  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0408 11:50:58.358942  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:58.359468  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:58.359493  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:58.359836  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:58.360050  394318 main.go:141] libmachine: (ha-438604-m04) Calling .GetState
	I0408 11:50:58.361690  394318 status.go:330] ha-438604-m04 host status = "Running" (err=<nil>)
	I0408 11:50:58.361709  394318 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:50:58.362103  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:58.362150  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:58.378659  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I0408 11:50:58.379113  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:58.379629  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:58.379655  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:58.380038  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:58.380298  394318 main.go:141] libmachine: (ha-438604-m04) Calling .GetIP
	I0408 11:50:58.383139  394318 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:50:58.383669  394318 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:48:20 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:50:58.383721  394318 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:50:58.383954  394318 host.go:66] Checking if "ha-438604-m04" exists ...
	I0408 11:50:58.384315  394318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:50:58.384376  394318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:50:58.400882  394318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I0408 11:50:58.401364  394318 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:50:58.401854  394318 main.go:141] libmachine: Using API Version  1
	I0408 11:50:58.401878  394318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:50:58.402242  394318 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:50:58.402437  394318 main.go:141] libmachine: (ha-438604-m04) Calling .DriverName
	I0408 11:50:58.402636  394318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 11:50:58.402661  394318 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHHostname
	I0408 11:50:58.405623  394318 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:50:58.406072  394318 main.go:141] libmachine: (ha-438604-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:d2:d8", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:48:20 +0000 UTC Type:0 Mac:52:54:00:f3:d2:d8 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-438604-m04 Clientid:01:52:54:00:f3:d2:d8}
	I0408 11:50:58.406101  394318 main.go:141] libmachine: (ha-438604-m04) DBG | domain ha-438604-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:f3:d2:d8 in network mk-ha-438604
	I0408 11:50:58.406262  394318 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHPort
	I0408 11:50:58.406439  394318 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHKeyPath
	I0408 11:50:58.406603  394318 main.go:141] libmachine: (ha-438604-m04) Calling .GetSSHUsername
	I0408 11:50:58.406720  394318 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604-m04/id_rsa Username:docker}
	W0408 11:51:16.943952  394318 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.121:22: connect: no route to host
	W0408 11:51:16.944113  394318 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0408 11:51:16.944136  394318 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	I0408 11:51:16.944144  394318 status.go:257] ha-438604-m04 status: &{Name:ha-438604-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0408 11:51:16.944168  394318 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-438604 -n ha-438604
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-438604 logs -n 25: (2.002486583s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m04 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp testdata/cp-test.txt                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604:/home/docker/cp-test_ha-438604-m04_ha-438604.txt                       |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604 sudo cat                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604.txt                                 |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m02:/home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m02 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m03:/home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n                                                                 | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | ha-438604-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-438604 ssh -n ha-438604-m03 sudo cat                                          | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC | 08 Apr 24 11:39 UTC |
	|         | /home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-438604 node stop m02 -v=7                                                     | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-438604 node start m02 -v=7                                                    | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-438604 -v=7                                                           | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-438604 -v=7                                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-438604 --wait=true -v=7                                                    | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:44 UTC | 08 Apr 24 11:48 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-438604                                                                | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:48 UTC |                     |
	| node    | ha-438604 node delete m03 -v=7                                                   | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:48 UTC | 08 Apr 24 11:48 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-438604 stop -v=7                                                              | ha-438604 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:44:38
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:44:38.214855  392192 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:44:38.215008  392192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:44:38.215021  392192 out.go:304] Setting ErrFile to fd 2...
	I0408 11:44:38.215027  392192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:44:38.215260  392192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:44:38.215899  392192 out.go:298] Setting JSON to false
	I0408 11:44:38.216957  392192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5221,"bootTime":1712571457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:44:38.217033  392192 start.go:139] virtualization: kvm guest
	I0408 11:44:38.219847  392192 out.go:177] * [ha-438604] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:44:38.221482  392192 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:44:38.221490  392192 notify.go:220] Checking for updates...
	I0408 11:44:38.222949  392192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:44:38.224423  392192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:44:38.226080  392192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:44:38.227515  392192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:44:38.228960  392192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:44:38.230638  392192 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:44:38.230748  392192 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:44:38.231259  392192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:44:38.231307  392192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:44:38.246610  392192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0408 11:44:38.247238  392192 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:44:38.247952  392192 main.go:141] libmachine: Using API Version  1
	I0408 11:44:38.247979  392192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:44:38.248485  392192 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:44:38.248721  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:44:38.285043  392192 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 11:44:38.286359  392192 start.go:297] selected driver: kvm2
	I0408 11:44:38.286378  392192 start.go:901] validating driver "kvm2" against &{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:44:38.286557  392192 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:44:38.286882  392192 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:44:38.286950  392192 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:44:38.302294  392192 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:44:38.302973  392192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 11:44:38.303057  392192 cni.go:84] Creating CNI manager for ""
	I0408 11:44:38.303069  392192 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0408 11:44:38.303122  392192 start.go:340] cluster config:
	{Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:44:38.303321  392192 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:44:38.305972  392192 out.go:177] * Starting "ha-438604" primary control-plane node in "ha-438604" cluster
	I0408 11:44:38.307458  392192 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:44:38.307508  392192 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 11:44:38.307522  392192 cache.go:56] Caching tarball of preloaded images
	I0408 11:44:38.307610  392192 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 11:44:38.307621  392192 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 11:44:38.307782  392192 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/config.json ...
	I0408 11:44:38.308008  392192 start.go:360] acquireMachinesLock for ha-438604: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 11:44:38.308056  392192 start.go:364] duration metric: took 27.504µs to acquireMachinesLock for "ha-438604"
	I0408 11:44:38.308073  392192 start.go:96] Skipping create...Using existing machine configuration
	I0408 11:44:38.308086  392192 fix.go:54] fixHost starting: 
	I0408 11:44:38.308335  392192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:44:38.308369  392192 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:44:38.323450  392192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0408 11:44:38.324036  392192 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:44:38.324585  392192 main.go:141] libmachine: Using API Version  1
	I0408 11:44:38.324611  392192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:44:38.324998  392192 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:44:38.325235  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:44:38.325415  392192 main.go:141] libmachine: (ha-438604) Calling .GetState
	I0408 11:44:38.327244  392192 fix.go:112] recreateIfNeeded on ha-438604: state=Running err=<nil>
	W0408 11:44:38.327262  392192 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 11:44:38.329239  392192 out.go:177] * Updating the running kvm2 "ha-438604" VM ...
	I0408 11:44:38.330300  392192 machine.go:94] provisionDockerMachine start ...
	I0408 11:44:38.330324  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:44:38.330586  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.333281  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.333750  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.333779  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.333968  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.334145  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.334296  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.334434  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.334616  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.334807  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.334823  392192 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 11:44:38.453369  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604
	
	I0408 11:44:38.453398  392192 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:44:38.453650  392192 buildroot.go:166] provisioning hostname "ha-438604"
	I0408 11:44:38.453677  392192 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:44:38.453906  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.456531  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.457064  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.457092  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.457297  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.457510  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.457694  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.457883  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.458081  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.458307  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.458323  392192 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-438604 && echo "ha-438604" | sudo tee /etc/hostname
	I0408 11:44:38.594814  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-438604
	
	I0408 11:44:38.594860  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.597842  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.598302  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.598338  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.598574  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.598822  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.599019  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.599145  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.599356  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.599534  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.599550  392192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-438604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-438604/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-438604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 11:44:38.717190  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 11:44:38.717228  392192 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 11:44:38.717260  392192 buildroot.go:174] setting up certificates
	I0408 11:44:38.717273  392192 provision.go:84] configureAuth start
	I0408 11:44:38.717283  392192 main.go:141] libmachine: (ha-438604) Calling .GetMachineName
	I0408 11:44:38.717630  392192 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:44:38.720517  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.720984  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.721006  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.721223  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.723670  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.724073  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.724116  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.724273  392192 provision.go:143] copyHostCerts
	I0408 11:44:38.724305  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:44:38.724371  392192 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 11:44:38.724393  392192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 11:44:38.724481  392192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 11:44:38.724578  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:44:38.724601  392192 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 11:44:38.724608  392192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 11:44:38.724632  392192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 11:44:38.724736  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:44:38.724755  392192 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 11:44:38.724759  392192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 11:44:38.724784  392192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 11:44:38.724847  392192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.ha-438604 san=[127.0.0.1 192.168.39.99 ha-438604 localhost minikube]
	I0408 11:44:38.780631  392192 provision.go:177] copyRemoteCerts
	I0408 11:44:38.780706  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 11:44:38.780734  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.783636  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.784072  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.784105  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.784282  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.784502  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.784680  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.784802  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:44:38.877161  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 11:44:38.877228  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 11:44:38.905740  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 11:44:38.905829  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0408 11:44:38.935152  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 11:44:38.935264  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 11:44:38.965534  392192 provision.go:87] duration metric: took 248.246129ms to configureAuth
	I0408 11:44:38.965568  392192 buildroot.go:189] setting minikube options for container-runtime
	I0408 11:44:38.965835  392192 config.go:182] Loaded profile config "ha-438604": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:44:38.965947  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:44:38.968421  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.968802  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:44:38.968828  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:44:38.969016  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:44:38.969222  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.969429  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:44:38.969596  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:44:38.969779  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:44:38.969999  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:44:38.970030  392192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 11:46:09.944554  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 11:46:09.944588  392192 machine.go:97] duration metric: took 1m31.61427192s to provisionDockerMachine
	I0408 11:46:09.944607  392192 start.go:293] postStartSetup for "ha-438604" (driver="kvm2")
	I0408 11:46:09.944621  392192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 11:46:09.944664  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:09.945026  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 11:46:09.945070  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:09.948349  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:09.948771  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:09.948796  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:09.948935  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:09.949149  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:09.949307  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:09.949427  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:46:10.041213  392192 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 11:46:10.046167  392192 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 11:46:10.046199  392192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 11:46:10.046288  392192 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 11:46:10.046410  392192 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 11:46:10.046430  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 11:46:10.046531  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 11:46:10.057545  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:46:10.085298  392192 start.go:296] duration metric: took 140.652468ms for postStartSetup
	I0408 11:46:10.085390  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.085763  392192 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0408 11:46:10.085795  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.088447  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.088973  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.089008  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.089240  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.089476  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.089652  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.089847  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	W0408 11:46:10.178949  392192 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0408 11:46:10.179000  392192 fix.go:56] duration metric: took 1m31.870921572s for fixHost
	I0408 11:46:10.179033  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.181878  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.182292  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.182324  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.182507  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.182686  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.182868  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.182983  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.183158  392192 main.go:141] libmachine: Using SSH client type: native
	I0408 11:46:10.183344  392192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0408 11:46:10.183355  392192 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 11:46:10.296963  392192 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712576770.263978103
	
	I0408 11:46:10.296995  392192 fix.go:216] guest clock: 1712576770.263978103
	I0408 11:46:10.297010  392192 fix.go:229] Guest: 2024-04-08 11:46:10.263978103 +0000 UTC Remote: 2024-04-08 11:46:10.179010332 +0000 UTC m=+92.016999693 (delta=84.967771ms)
	I0408 11:46:10.297053  392192 fix.go:200] guest clock delta is within tolerance: 84.967771ms
	I0408 11:46:10.297066  392192 start.go:83] releasing machines lock for "ha-438604", held for 1m31.98899047s
	I0408 11:46:10.297092  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.297414  392192 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:46:10.300024  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.300505  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.300546  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.300689  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.301246  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.301425  392192 main.go:141] libmachine: (ha-438604) Calling .DriverName
	I0408 11:46:10.301523  392192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 11:46:10.301574  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.301701  392192 ssh_runner.go:195] Run: cat /version.json
	I0408 11:46:10.301726  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHHostname
	I0408 11:46:10.304350  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.304564  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.304780  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.304809  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.304959  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.305109  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:10.305150  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.305195  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:10.305278  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHPort
	I0408 11:46:10.305340  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.305471  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
	I0408 11:46:10.305486  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHKeyPath
	I0408 11:46:10.305708  392192 main.go:141] libmachine: (ha-438604) Calling .GetSSHUsername
	I0408 11:46:10.305957  392192 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa Username:docker}
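sshutil builds an SSH client from the DHCP-lease IP, port 22, the per-machine id_rsa key and the docker user before running remote commands such as the `systemctl --version` call that follows. A rough, hedged equivalent using golang.org/x/crypto/ssh (address and key path copied from the log lines above; host-key checking is disabled here only because the target is a throwaway test VM):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18588-368424/.minikube/machines/ha-438604/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}

		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for an ephemeral test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.39.99:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("systemctl --version")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}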
	I0408 11:46:10.423598  392192 ssh_runner.go:195] Run: systemctl --version
	I0408 11:46:10.430911  392192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 11:46:10.604410  392192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 11:46:10.610820  392192 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 11:46:10.610912  392192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 11:46:10.621291  392192 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 11:46:10.621322  392192 start.go:494] detecting cgroup driver to use...
	I0408 11:46:10.621396  392192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 11:46:10.640597  392192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 11:46:10.657011  392192 docker.go:217] disabling cri-docker service (if available) ...
	I0408 11:46:10.657097  392192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 11:46:10.673084  392192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 11:46:10.687745  392192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 11:46:10.845264  392192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 11:46:11.003546  392192 docker.go:233] disabling docker service ...
	I0408 11:46:11.003654  392192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 11:46:11.021251  392192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 11:46:11.036833  392192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 11:46:11.189970  392192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 11:46:11.352603  392192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 11:46:11.368881  392192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 11:46:11.390445  392192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 11:46:11.390508  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.402424  392192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 11:46:11.402499  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.413878  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.425475  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.436838  392192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 11:46:11.448452  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.459732  392192 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.473134  392192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 11:46:11.485040  392192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 11:46:11.495447  392192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 11:46:11.506014  392192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:46:11.662136  392192 ssh_runner.go:195] Run: sudo systemctl restart crio
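Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following key/value lines before crio is restarted. This is reconstructed from the commands in this log, not copied from the VM, and the lines sit inside their respective sections of the existing drop-in file:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]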
	I0408 11:46:12.363821  392192 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 11:46:12.363905  392192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 11:46:12.369549  392192 start.go:562] Will wait 60s for crictl version
	I0408 11:46:12.369613  392192 ssh_runner.go:195] Run: which crictl
	I0408 11:46:12.374037  392192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 11:46:12.416237  392192 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 11:46:12.416329  392192 ssh_runner.go:195] Run: crio --version
	I0408 11:46:12.447388  392192 ssh_runner.go:195] Run: crio --version
	I0408 11:46:12.481174  392192 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 11:46:12.482709  392192 main.go:141] libmachine: (ha-438604) Calling .GetIP
	I0408 11:46:12.486007  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:12.486502  392192 main.go:141] libmachine: (ha-438604) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:8e:55", ip: ""} in network mk-ha-438604: {Iface:virbr1 ExpiryTime:2024-04-08 12:34:17 +0000 UTC Type:0 Mac:52:54:00:cc:8e:55 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-438604 Clientid:01:52:54:00:cc:8e:55}
	I0408 11:46:12.486533  392192 main.go:141] libmachine: (ha-438604) DBG | domain ha-438604 has defined IP address 192.168.39.99 and MAC address 52:54:00:cc:8e:55 in network mk-ha-438604
	I0408 11:46:12.486697  392192 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 11:46:12.492148  392192 kubeadm.go:877] updating cluster {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 11:46:12.492295  392192 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:46:12.492335  392192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:46:12.537662  392192 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:46:12.537690  392192 crio.go:433] Images already preloaded, skipping extraction
	I0408 11:46:12.537785  392192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 11:46:12.578052  392192 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 11:46:12.578078  392192 cache_images.go:84] Images are preloaded, skipping loading
	I0408 11:46:12.578089  392192 kubeadm.go:928] updating node { 192.168.39.99 8443 v1.29.3 crio true true} ...
	I0408 11:46:12.578291  392192 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-438604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 11:46:12.578369  392192 ssh_runner.go:195] Run: crio config
	I0408 11:46:12.633193  392192 cni.go:84] Creating CNI manager for ""
	I0408 11:46:12.633220  392192 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0408 11:46:12.633239  392192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 11:46:12.633268  392192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-438604 NodeName:ha-438604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 11:46:12.633448  392192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-438604"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 11:46:12.633476  392192 kube-vip.go:111] generating kube-vip config ...
	I0408 11:46:12.633539  392192 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 11:46:12.646199  392192 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0408 11:46:12.646352  392192 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0408 11:46:12.646413  392192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 11:46:12.657155  392192 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 11:46:12.657223  392192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0408 11:46:12.667553  392192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0408 11:46:12.685786  392192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 11:46:12.703921  392192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0408 11:46:12.722577  392192 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0408 11:46:12.739893  392192 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0408 11:46:12.745408  392192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 11:46:12.893401  392192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 11:46:12.909203  392192 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604 for IP: 192.168.39.99
	I0408 11:46:12.909232  392192 certs.go:194] generating shared ca certs ...
	I0408 11:46:12.909254  392192 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:46:12.909456  392192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 11:46:12.909518  392192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 11:46:12.909533  392192 certs.go:256] generating profile certs ...
	I0408 11:46:12.909632  392192 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/client.key
	I0408 11:46:12.909667  392192 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550
	I0408 11:46:12.909691  392192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.99 192.168.39.219 192.168.39.94 192.168.39.254]
	I0408 11:46:13.107246  392192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550 ...
	I0408 11:46:13.107293  392192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550: {Name:mk2a66efb8e7b70b4d7242b919efe4d10dc76679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:46:13.107507  392192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550 ...
	I0408 11:46:13.107527  392192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550: {Name:mk61497c5d18ad8679ebd14d2f81bc2ffe59a139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 11:46:13.107630  392192 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt.45572550 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt
	I0408 11:46:13.107840  392192 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key.45572550 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key
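The apiserver certificate generated above is issued for a list of IP SANs covering the in-cluster service IP (10.96.0.1), localhost, every control-plane node address, and the kube-vip VIP 192.168.39.254. A self-signed Go sketch of producing a certificate carrying those IP SANs; the real flow signs with the cluster CA rather than self-signing, and the subject name here is illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube-apiserver"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs matching the list logged by crypto.go above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.99"), net.ParseIP("192.168.39.219"),
				net.ParseIP("192.168.39.94"), net.ParseIP("192.168.39.254"),
			},
		}

		// Self-signed for brevity: the template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}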
	I0408 11:46:13.108027  392192 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key
	I0408 11:46:13.108057  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 11:46:13.108079  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 11:46:13.108100  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 11:46:13.108116  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 11:46:13.108135  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 11:46:13.108153  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 11:46:13.108173  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 11:46:13.108190  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 11:46:13.108255  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 11:46:13.108296  392192 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 11:46:13.108312  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 11:46:13.108348  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 11:46:13.108477  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 11:46:13.108511  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 11:46:13.108583  392192 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 11:46:13.108621  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.108641  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.108663  392192 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.109465  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 11:46:13.159383  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 11:46:13.200310  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 11:46:13.234594  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 11:46:13.260171  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 11:46:13.296092  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 11:46:13.323008  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 11:46:13.351189  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/ha-438604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 11:46:13.378797  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 11:46:13.406048  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 11:46:13.432550  392192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 11:46:13.459903  392192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 11:46:13.478008  392192 ssh_runner.go:195] Run: openssl version
	I0408 11:46:13.484587  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 11:46:13.496485  392192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.501616  392192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.501694  392192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 11:46:13.507892  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 11:46:13.518325  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 11:46:13.530903  392192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.536020  392192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.536104  392192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 11:46:13.542665  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 11:46:13.554153  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 11:46:13.566387  392192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.571572  392192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.571643  392192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 11:46:13.577934  392192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 11:46:13.588432  392192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 11:46:13.593528  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 11:46:13.600017  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 11:46:13.606439  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 11:46:13.612594  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 11:46:13.618678  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 11:46:13.624800  392192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
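Each `openssl x509 ... -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); a zero exit status means it is still valid for at least that long. The same check in Go, assuming a PEM-encoded certificate on disk (the path below is one of the files checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Mirrors `openssl x509 -checkend 86400`: flag the cert if it expires within a day.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; regeneration needed")
		} else {
			fmt.Println("certificate valid for at least another 24h")
		}
	}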
	I0408 11:46:13.630877  392192 kubeadm.go:391] StartCluster: {Name:ha-438604 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-438604 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.219 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.121 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:46:13.631009  392192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 11:46:13.631068  392192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 11:46:13.673709  392192 cri.go:89] found id: "802aef6168e17b113cf37d1afff2ef056b19deb40792ecd9788195491f5badfa"
	I0408 11:46:13.673747  392192 cri.go:89] found id: "9e33b38740b9fc869c0e33947343ff68cb4602e28c71caa1b085888d2d0fa357"
	I0408 11:46:13.673753  392192 cri.go:89] found id: "af9e53f4b07b92152c87ad1055792e15e0ec552ccf4a0f9e8e52f169a1bfac1c"
	I0408 11:46:13.673758  392192 cri.go:89] found id: "7d36288c5678cf4e1e7f1ce02aa857b97f86081eea3b13e20bf674fd4833024b"
	I0408 11:46:13.673761  392192 cri.go:89] found id: "f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d"
	I0408 11:46:13.673766  392192 cri.go:89] found id: "63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938"
	I0408 11:46:13.673770  392192 cri.go:89] found id: "557462b300c32a57592a6dbfb63a59b8a9e967b32e2cd98811982e43c7f1f654"
	I0408 11:46:13.673774  392192 cri.go:89] found id: "a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119"
	I0408 11:46:13.673778  392192 cri.go:89] found id: "b2d05e909b1dd62c7d22df09e2879ee245da04002b5b289aa59937ebedea1861"
	I0408 11:46:13.673786  392192 cri.go:89] found id: "677d8d8c878cc970818a22eecb2a4594f198c2b04e0d3c0f769a2ed0e1529842"
	I0408 11:46:13.673794  392192 cri.go:89] found id: "982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a"
	I0408 11:46:13.673798  392192 cri.go:89] found id: "532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18"
	I0408 11:46:13.673803  392192 cri.go:89] found id: "3f52ec6258fa2e5a1213507a0a5ddc40b5df31d013abb50f9e7f437faa224bef"
	I0408 11:46:13.673806  392192 cri.go:89] found id: ""
	I0408 11:46:13.673848  392192 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.703271000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712577077703249918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3461b7ca-a3c0-40c2-82a3-4f419060ba3b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.704359790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a24dc4c-a2db-4cde-940c-378ee3787564 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.704444727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a24dc4c-a2db-4cde-940c-378ee3787564 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.705054015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a24dc4c-a2db-4cde-940c-378ee3787564 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.752510360Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c97f8d21-d36a-4e76-9464-31f600b1a11c name=/runtime.v1.RuntimeService/Version
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.752658478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c97f8d21-d36a-4e76-9464-31f600b1a11c name=/runtime.v1.RuntimeService/Version
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.753694541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee1ae120-31d6-48ff-bfed-8437b2947bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.754118319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712577077754095768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee1ae120-31d6-48ff-bfed-8437b2947bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.754786331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4700c2d-d8b2-466e-bd7b-f46d4e9e43bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.754844054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4700c2d-d8b2-466e-bd7b-f46d4e9e43bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.755595325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4700c2d-d8b2-466e-bd7b-f46d4e9e43bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.809885612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2bacaa79-280b-4808-8e43-73a39444aed9 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.809986378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2bacaa79-280b-4808-8e43-73a39444aed9 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.811109522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f551c361-7614-4b55-9633-853089a5041b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.811632629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712577077811607464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f551c361-7614-4b55-9633-853089a5041b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.812194240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4ee821c-2808-4cac-bbfb-76f599bd01ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.812250095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4ee821c-2808-4cac-bbfb-76f599bd01ef name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.812692191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4ee821c-2808-4cac-bbfb-76f599bd01ef name=/runtime.v1.RuntimeService/ListContainers
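	The repeated entries above are the CRI calls (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers) that the log collector issues against CRI-O while gathering this report; each poll produces one Version, ImageFsInfo, and unfiltered ListContainers round trip, which is why the container list is dumped several times with different request ids. The sketch below shows how such a listing could be reproduced against a CRI endpoint. It is a minimal illustration only: the socket path /var/run/crio/crio.sock and the use of the k8s.io/cri-api Go client are assumptions, not something taken from this report (crictl version / crictl imagefsinfo / crictl ps -a would be the rough CLI equivalent).

	// Hypothetical sketch (not part of the test harness): issue the same CRI
	// calls seen in the log above against an assumed CRI-O socket.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumed CRI-O endpoint; adjust for the runtime under test.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("image fs %s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}

		// /runtime.v1.RuntimeService/ListContainers with an empty filter,
		// which is why CRI-O logs "No filters were applied" above.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%.13s %-24s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	Listing with an empty filter returns every container known to the runtime, including CONTAINER_EXITED entries, which matches the mix of RUNNING and EXITED attempts recorded in the responses above.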
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.872471107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=058523b2-e94b-4430-94a6-c14ddf185180 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.872671994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=058523b2-e94b-4430-94a6-c14ddf185180 name=/runtime.v1.RuntimeService/Version
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.874361978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35e289c2-3a23-4c22-8ffe-9dd679468c6e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.875315048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712577077875281164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35e289c2-3a23-4c22-8ffe-9dd679468c6e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.876222785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dc5704a-fe5a-418e-9cca-4fe89bec7b97 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.876302368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dc5704a-fe5a-418e-9cca-4fe89bec7b97 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 11:51:17 ha-438604 crio[3968]: time="2024-04-08 11:51:17.876925048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712576836475144538,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f,PodSandboxId:3c1aadf18bdfa1179e1e537fc5d7dd9c19b5d890642da7289bfec4788968d5cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712576831454445583,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46a902f5-0192-4a86-bfe4-4b4d663402c1,},Annotations:map[string]string{io.kubernetes.container.hash: b8e3cf3a,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712576822457186547,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712576814450411298,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff7658c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6157d29435bbd140396154bcd9a78ccfc4b40ae9381cc51d1eb08e43026426f9,PodSandboxId:2e231a74d5722d2dc80b47421dcc34ed5768ce11d297b1a89cf4004681bde2b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712576809741661444,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]string{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b152d885c64b37aba281501555a42ab3960b7a0e04dd30e16cc149eec7b5fe3,PodSandboxId:6cdc04070cee5ab8a294241ae5279dce3da99ba59c4166837a27cff5205e60d5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1712576791774674276,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70da15df0c8896fec7cbc5ca9d7e2bc4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a,PodSandboxId:627cb3e544fdbc7aed781848ba5949a92ce3b848cd7889d47056d06ecb2f89ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776675758874,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TC
P\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada,PodSandboxId:f0bb641e0019f444ca48a09df23de7040077c28253d95ee1d6985fce66f0c49a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712576776642725447,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.c
ontainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f,PodSandboxId:c76f6fa586a15085d496d6036d63a2c00a513a4c388fd0f153e8712913399fa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712576776719467539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-82krw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d313f06-523c-446e-a047-640980b34c0e,},Annotations:map[string]string{io.kubernetes.container.hash: e0e354c8,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc,PodSandboxId:e111e127bd599208816ee181bf633c85569caf139f7501bf9bd75933af43962a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712576776740240052,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38,PodSandboxId:7bb391b1586eba721eb833b9123a8be89457139fde0a953684a18f5d373fe02c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712576776645024258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-438604,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 53fbb3716d696c55355c71f791e17add,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3,PodSandboxId:3f0df28139117d278c662ebe4ebad0c2bfd8417a5179bdebdf092384885b3700,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712576776570804460,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e,PodSandboxId:afc7832f7d6e5795179a4ee702d5f4487639c7ad56a59030191210acbb085d7e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712576776444458246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93b1bff76
58c8575814a65b8cc79f815,},Annotations:map[string]string{io.kubernetes.container.hash: e3c356bc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b,PodSandboxId:4b7048b6c7973260d96263c2c3c24809e9d3223221b0a2b7df5792b847c105fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712576776403440778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11b291bd9a24651c51906d8397bfeafcd7489252e68d796258b30a36434df34f,PodSandboxId:76f708d0734ed74f6592ab020c313ba6715d88cdfc0cbda783645a715b385361,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712576281566841832,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-cdh5l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a83c06a6-d809-4c17-a406-3f1d4b9cfaf7,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa30c260,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d,PodSandboxId:acf17bfe1f0434ae5ba727a8fee4a9479a05c8fbedc8a3ced97bcfdd892c7c79,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104333332889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7gpzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 761f75a2-9b79-4c09-b91d-5b031e0688d4,},Annotations:map[string]string{io.kubernetes.container.hash
: 3a9ff7f0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938,PodSandboxId:328834ce582caa609c73b80ce01d10a8307c33985839cf623b33e35bd1286388,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712576104268239796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.po
d.name: coredns-76f75df574-wqrvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39dd6d41-947e-4b4f-85a8-99fd88d1f4d0,},Annotations:map[string]string{io.kubernetes.container.hash: fb4f7b75,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119,PodSandboxId:ffe693490c6c3c0faebb1ef776298586b9073174df002f1aa99c6c606750c1b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:
a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712576102380877439,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v98zm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b430193d-b6ab-442c-8ff3-12a8f2c144b9,},Annotations:map[string]string{io.kubernetes.container.hash: d182f2a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a,PodSandboxId:95acb68b16e77b599f4e3077ba4e6fcf3fcc2fc20e4a3d0405b27550747206dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a805653
84dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712576081183414377,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffa293b638ee951e85e4b24371ebee4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18,PodSandboxId:681a212174e369dee5e13c78b1da8fedbf96bd850356966d05517f730e147824,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_EXITED,CreatedAt:1712576081137370837,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-438604,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a9a54e2599021aa0464f428f90149d,},Annotations:map[string]string{io.kubernetes.container.hash: 5f8acb73,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dc5704a-fe5a-418e-9cca-4fe89bec7b97 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea03841cefbee       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   c76f6fa586a15       kindnet-82krw
	3a82aec0e7a59       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       5                   3c1aadf18bdfa       storage-provisioner
	9b43ce15065cf       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Running             kube-controller-manager   2                   7bb391b1586eb       kube-controller-manager-ha-438604
	8b2b8a7eb724b       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Running             kube-apiserver            3                   afc7832f7d6e5       kube-apiserver-ha-438604
	6157d29435bbd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   2e231a74d5722       busybox-7fdf7869d9-cdh5l
	2b152d885c64b       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   6cdc04070cee5       kube-vip-ha-438604
	68450e531a890       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e111e127bd599       coredns-76f75df574-wqrvc
	8ca11829d6ed9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   c76f6fa586a15       kindnet-82krw
	f3470c370f90d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   627cb3e544fdb       coredns-76f75df574-7gpzq
	9f05730617753       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Exited              kube-controller-manager   1                   7bb391b1586eb       kube-controller-manager-ha-438604
	abb0230fab47a       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      5 minutes ago       Running             kube-scheduler            1                   f0bb641e0019f       kube-scheduler-ha-438604
	ec3dd5d319504       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                1                   3f0df28139117       kube-proxy-v98zm
	c2a76ebe0ee11       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Exited              kube-apiserver            2                   afc7832f7d6e5       kube-apiserver-ha-438604
	81299411c841d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   4b7048b6c7973       etcd-ha-438604
	11b291bd9a246       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   76f708d0734ed       busybox-7fdf7869d9-cdh5l
	f0cafcafceece       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   acf17bfe1f043       coredns-76f75df574-7gpzq
	63c0e178c3e78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   328834ce582ca       coredns-76f75df574-wqrvc
	a0bffd365d14f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      16 minutes ago      Exited              kube-proxy                0                   ffe693490c6c3       kube-proxy-v98zm
	982252ef21b29       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      16 minutes ago      Exited              kube-scheduler            0                   95acb68b16e77       kube-scheduler-ha-438604
	532fccde459b9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   681a212174e36       etcd-ha-438604
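	
	A listing like the one above can be reproduced on the minikube VM with crictl; this is a sketch, with the profile name ha-438604 taken from this run and the node defaulting to the primary control plane:
	
	  $ minikube ssh -p ha-438604 -- sudo crictl ps -a
	  $ minikube ssh -p ha-438604 -- sudo crictl inspect <container-id>    # full per-container detail; <container-id> is a placeholder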
	
	
	==> coredns [63c0e178c3e785052f9a2b38486423f1401693cd3990e8365f3d7d05b0b3d938] <==
	[INFO] 10.244.1.2:54572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121059s
	[INFO] 10.244.0.4:55733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248755s
	[INFO] 10.244.0.4:44663 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046479s
	[INFO] 10.244.2.2:43313 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161932s
	[INFO] 10.244.2.2:36056 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115719s
	[INFO] 10.244.2.2:58531 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248815s
	[INFO] 10.244.1.2:40849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115353s
	[INFO] 10.244.1.2:51289 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105404s
	[INFO] 10.244.1.2:56814 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070319s
	[INFO] 10.244.0.4:35492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160626s
	[INFO] 10.244.0.4:34374 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082632s
	[INFO] 10.244.2.2:43756 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109569s
	[INFO] 10.244.2.2:45152 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124387s
	[INFO] 10.244.1.2:38830 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135636s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1923&timeout=8m32s&timeoutSeconds=512&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1927&timeout=8m17s&timeoutSeconds=497&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1958&timeout=9m3s&timeoutSeconds=543&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [68450e531a890d73f56e6d653f7d392b82e297c6ebd9c15b38f42101f40b63cc] <==
	Trace[645879881]: [10.581953501s] [10.581953501s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:49712->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0cafcafceece973e0c8c993592b662258f84afba7538820016b8e231204414d] <==
	[INFO] 10.244.2.2:33817 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024307s
	[INFO] 10.244.2.2:53777 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096848s
	[INFO] 10.244.1.2:51257 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001970096s
	[INFO] 10.244.1.2:37927 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164729s
	[INFO] 10.244.1.2:46840 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074025s
	[INFO] 10.244.1.2:40034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116336s
	[INFO] 10.244.1.2:46524 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110431s
	[INFO] 10.244.0.4:47504 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116612s
	[INFO] 10.244.0.4:52704 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105138s
	[INFO] 10.244.2.2:40699 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000199266s
	[INFO] 10.244.1.2:46666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009956s
	[INFO] 10.244.0.4:57492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119263s
	[INFO] 10.244.0.4:45362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139004s
	[INFO] 10.244.2.2:58706 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239864s
	[INFO] 10.244.2.2:32981 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128008s
	[INFO] 10.244.1.2:38182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167786s
	[INFO] 10.244.1.2:44324 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000206004s
	[INFO] 10.244.1.2:37810 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013702s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1958&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3470c370f90d95386625f23f3b4851fd0e9d54584a858928866a3f932d8462a] <==
	Trace[316290978]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59182->10.96.0.1:443: read: connection reset by peer 10243ms (11:46:38.822)
	Trace[316290978]: [10.243418209s] [10.243418209s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59182->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1347173231]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (08-Apr-2024 11:46:28.378) (total time: 10443ms):
	Trace[1347173231]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59154->10.96.0.1:443: read: connection reset by peer 10443ms (11:46:38.822)
	Trace[1347173231]: [10.443989451s] [10.443989451s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59154->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35288->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:35288->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
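	
	The coredns logs above show repeated failures to reach the apiserver service VIP (10.96.0.1:443: connection refused / no route to host) while the control plane was restarting. The same logs can be pulled directly; a sketch, assuming the kubectl context name matches the minikube profile:
	
	  $ kubectl --context ha-438604 -n kube-system logs coredns-76f75df574-wqrvc
	  $ kubectl --context ha-438604 -n kube-system logs coredns-76f75df574-7gpzq --previous    # logs from the exited attempt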
	
	
	==> describe nodes <==
	Name:               ha-438604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T11_34_48_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:34:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:51:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:34:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:47:00 +0000   Mon, 08 Apr 2024 11:35:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    ha-438604
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d242cef9ed484660b2c31aeed7e51ff5
	  System UUID:                d242cef9-ed48-4660-b2c3-1aeed7e51ff5
	  Boot ID:                    336ee057-2212-4601-ad25-56ebfd2bc06e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-cdh5l             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-7gpzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-wqrvc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-438604                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-82krw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-438604             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-438604    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-v98zm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-438604             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-438604                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m19s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-438604 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-438604 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-438604 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-438604 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Warning  ContainerGCFailed        5m30s (x2 over 6m30s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m4s                   node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-438604 event: Registered Node ha-438604 in Controller
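	
	The node conditions and events above can be re-queried at any point during the test; a sketch, again assuming the kubectl context matches the minikube profile:
	
	  $ kubectl --context ha-438604 get nodes -o wide
	  $ kubectl --context ha-438604 describe node ha-438604-m04    # the worker left tainted node.kubernetes.io/unreachable further below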
	
	
	Name:               ha-438604-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_36_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:36:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:51:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 11:47:38 +0000   Mon, 08 Apr 2024 11:46:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    ha-438604-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 957d2c54c49c48d0b297f4467d1bac27
	  System UUID:                957d2c54-c49c-48d0-b297-f4467d1bac27
	  Boot ID:                    7918dd25-1555-4fc9-bbdf-e61f03277376
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jz4h9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-438604-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-b5ztk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-438604-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-438604-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5vc66                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-438604-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-438604-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node ha-438604-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node ha-438604-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node ha-438604-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                    node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-438604-m02 status is now: NodeReady
	  Normal  RegisteredNode           14m                    node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-438604-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node ha-438604-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node ha-438604-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-438604-m02 event: Registered Node ha-438604-m02 in Controller
	
	
	Name:               ha-438604-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-438604-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=ha-438604
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T11_38_38_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 11:38:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-438604-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 11:48:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:49:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:49:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:49:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Apr 2024 11:48:29 +0000   Mon, 08 Apr 2024 11:49:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    ha-438604-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0df153c018eb4bd3bce7e2132da5651e
	  System UUID:                0df153c0-18eb-4bd3-bce7-e2132da5651e
	  Boot ID:                    aa2279bb-1c5b-4505-9645-4e27e21c0101
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-774pb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-8rrcs               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-2vmwq            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-438604-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-438604-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-438604-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-438604-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m4s                   node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-438604-m04 event: Registered Node ha-438604-m04 in Controller
	  Normal   Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-438604-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-438604-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-438604-m04 has been rebooted, boot id: aa2279bb-1c5b-4505-9645-4e27e21c0101
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-438604-m04 status is now: NodeReady
	  Normal   NodeNotReady             109s (x2 over 3m24s)   node-controller  Node ha-438604-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +11.215969] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.059868] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060056] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.165820] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.136183] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.312263] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.585051] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.065353] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.627952] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.807124] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.007059] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.588023] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[Apr 8 11:35] kauditd_printk_skb: 15 callbacks suppressed
	[Apr 8 11:36] kauditd_printk_skb: 78 callbacks suppressed
	[Apr 8 11:43] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 8 11:46] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.094109] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067504] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +0.183172] systemd-fstab-generator[3912]: Ignoring "noauto" option for root device
	[  +0.165677] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[  +0.313219] systemd-fstab-generator[3953]: Ignoring "noauto" option for root device
	[  +1.227917] systemd-fstab-generator[4054]: Ignoring "noauto" option for root device
	[  +3.126783] kauditd_printk_skb: 127 callbacks suppressed
	[ +15.824835] kauditd_printk_skb: 75 callbacks suppressed
	[ +22.788409] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [532fccde459b9d385dde12d86bc2515625017168724914a8dab8ab5b2ca46e18] <==
	{"level":"info","ts":"2024-04-08T11:44:39.252855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.25295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.252965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 received MsgPreVoteResp from 3b7a74ffda0d9c54 at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.25298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 [logterm: 2, index: 2281] sent MsgPreVote request to 780efa3e7bded717 at term 2"}
	{"level":"info","ts":"2024-04-08T11:44:39.252987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 [logterm: 2, index: 2281] sent MsgPreVote request to 7ff681eaaadd5fcd at term 2"}
	{"level":"warn","ts":"2024-04-08T11:44:39.39443Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.99:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T11:44:39.394494Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.99:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-08T11:44:39.394691Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3b7a74ffda0d9c54","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-08T11:44:39.394865Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.394941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.394996Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395141Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395209Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395324Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395337Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7ff681eaaadd5fcd"}
	{"level":"info","ts":"2024-04-08T11:44:39.395344Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395353Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395409Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395469Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395652Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395721Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.395733Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:44:39.40024Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2024-04-08T11:44:39.400365Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2024-04-08T11:44:39.400376Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-438604","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.99:2380"],"advertise-client-urls":["https://192.168.39.99:2379"]}
	
	
	==> etcd [81299411c841dc416878e411601ccbd0be7f1574e0065977109d3342c6915f0b] <==
	{"level":"info","ts":"2024-04-08T11:47:46.500631Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:46.509245Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:47:47.486853Z","caller":"traceutil/trace.go:171","msg":"trace[182480743] transaction","detail":"{read_only:false; response_revision:2395; number_of_response:1; }","duration":"147.557307ms","start":"2024-04-08T11:47:47.339272Z","end":"2024-04-08T11:47:47.48683Z","steps":["trace[182480743] 'process raft request'  (duration: 147.47306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T11:47:47.63997Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:47:47.640036Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"780efa3e7bded717","rtt":"0s","error":"dial tcp 192.168.39.94:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-08T11:48:00.851366Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.466113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-08T11:48:00.851599Z","caller":"traceutil/trace.go:171","msg":"trace[1280738319] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2463; }","duration":"104.772328ms","start":"2024-04-08T11:48:00.746788Z","end":"2024-04-08T11:48:00.851561Z","steps":["trace[1280738319] 'count revisions from in-memory index tree'  (duration: 103.320612ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:48:01.839393Z","caller":"traceutil/trace.go:171","msg":"trace[814372312] transaction","detail":"{read_only:false; response_revision:2467; number_of_response:1; }","duration":"116.04189ms","start":"2024-04-08T11:48:01.723334Z","end":"2024-04-08T11:48:01.839376Z","steps":["trace[814372312] 'process raft request'  (duration: 115.936035ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T11:48:43.660126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=(4285866637620255828 9220700131976634317)"}
	{"level":"info","ts":"2024-04-08T11:48:43.662611Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","removed-remote-peer-id":"780efa3e7bded717","removed-remote-peer-urls":["https://192.168.39.94:2380"]}
	{"level":"info","ts":"2024-04-08T11:48:43.66285Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"780efa3e7bded717"}
	{"level":"warn","ts":"2024-04-08T11:48:43.663108Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:48:43.663191Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"780efa3e7bded717"}
	{"level":"warn","ts":"2024-04-08T11:48:43.663348Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:48:43.663395Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:48:43.66375Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"warn","ts":"2024-04-08T11:48:43.664095Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717","error":"context canceled"}
	{"level":"warn","ts":"2024-04-08T11:48:43.664302Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"780efa3e7bded717","error":"failed to read 780efa3e7bded717 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-08T11:48:43.6644Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"warn","ts":"2024-04-08T11:48:43.664764Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717","error":"context canceled"}
	{"level":"info","ts":"2024-04-08T11:48:43.664836Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3b7a74ffda0d9c54","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:48:43.664877Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"780efa3e7bded717"}
	{"level":"info","ts":"2024-04-08T11:48:43.665039Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3b7a74ffda0d9c54","removed-remote-peer-id":"780efa3e7bded717"}
	{"level":"warn","ts":"2024-04-08T11:48:43.686272Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3b7a74ffda0d9c54","remote-peer-id-stream-handler":"3b7a74ffda0d9c54","remote-peer-id-from":"780efa3e7bded717"}
	{"level":"warn","ts":"2024-04-08T11:48:43.693075Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.94:39528","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:51:18 up 17 min,  0 users,  load average: 0.17, 0.58, 0.44
	Linux ha-438604 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8ca11829d6ed952fe383d30736c792f272e9bd4adee959e5c1b3f70e8563836f] <==
	I0408 11:46:17.183354       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0408 11:46:17.183454       1 main.go:107] hostIP = 192.168.39.99
	podIP = 192.168.39.99
	I0408 11:46:17.183709       1 main.go:116] setting mtu 1500 for CNI 
	I0408 11:46:17.183734       1 main.go:146] kindnetd IP family: "ipv4"
	I0408 11:46:17.183759       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0408 11:46:27.383478       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0408 11:46:37.384725       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0408 11:46:39.126332       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0408 11:46:41.127989       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0408 11:46:44.128792       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [ea03841cefbee93fff9af121b4c618a9166f9126f598b0de9692f8564ea15af9] <==
	I0408 11:50:37.759222       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:50:47.773286       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:50:47.773614       1 main.go:227] handling current node
	I0408 11:50:47.773704       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:50:47.773734       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:50:47.773886       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:50:47.773907       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:50:57.780859       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:50:57.781099       1 main.go:227] handling current node
	I0408 11:50:57.781144       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:50:57.781166       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:50:57.781304       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:50:57.781326       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:51:07.795047       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:51:07.795216       1 main.go:227] handling current node
	I0408 11:51:07.795304       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:51:07.795355       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:51:07.795629       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:51:07.795688       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	I0408 11:51:17.802686       1 main.go:223] Handling node with IPs: map[192.168.39.99:{}]
	I0408 11:51:17.802707       1 main.go:227] handling current node
	I0408 11:51:17.802724       1 main.go:223] Handling node with IPs: map[192.168.39.219:{}]
	I0408 11:51:17.802728       1 main.go:250] Node ha-438604-m02 has CIDR [10.244.1.0/24] 
	I0408 11:51:17.802844       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0408 11:51:17.802849       1 main.go:250] Node ha-438604-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8b2b8a7eb724b502c92d105fbf3d64abef74739bc04382e50c489d60d90aefe6] <==
	I0408 11:46:57.407189       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0408 11:46:57.407222       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0408 11:46:57.407260       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0408 11:46:57.407279       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0408 11:46:57.409717       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0408 11:46:57.484285       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 11:46:57.490661       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 11:46:57.505672       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0408 11:46:57.506464       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0408 11:46:57.506706       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0408 11:46:57.507048       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 11:46:57.508830       1 shared_informer.go:318] Caches are synced for configmaps
	I0408 11:46:57.509942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0408 11:46:57.510222       1 aggregator.go:165] initial CRD sync complete...
	I0408 11:46:57.510696       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 11:46:57.510735       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 11:46:57.510759       1 cache.go:39] Caches are synced for autoregister controller
	I0408 11:46:57.511677       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	W0408 11:46:57.555365       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94]
	I0408 11:46:57.558745       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 11:46:57.587741       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0408 11:46:57.596637       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0408 11:46:58.423170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0408 11:46:58.954580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.94 192.168.39.99]
	W0408 11:48:58.959061       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.219 192.168.39.99]
	
	
	==> kube-apiserver [c2a76ebe0ee1107cf7f085f5d2426d238106e442cb024b5bc3738a1d06159b3e] <==
	I0408 11:46:17.164042       1 options.go:222] external host was not specified, using 192.168.39.99
	I0408 11:46:17.182498       1 server.go:148] Version: v1.29.3
	I0408 11:46:17.184187       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:46:17.809899       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0408 11:46:17.813885       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0408 11:46:17.813924       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0408 11:46:17.814225       1 instance.go:297] Using reconciler: lease
	W0408 11:46:37.805114       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0408 11:46:37.807624       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0408 11:46:37.817779       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9b43ce15065cf3bfd1edfa456126029455bd1f9d6fcfdba807e0b5713cc8826a] <==
	I0408 11:48:42.402057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.786µs"
	I0408 11:48:42.527071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="59.367µs"
	I0408 11:48:42.544611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="169.758µs"
	I0408 11:48:42.550651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="175.337µs"
	I0408 11:48:43.498589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="15.505166ms"
	I0408 11:48:43.498691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.588µs"
	I0408 11:48:55.358584       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-438604-m04"
	E0408 11:48:55.402448       1 garbagecollector.go:408] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-438604-m03", UID:"76a1b3c2-1701-421b-87bf-40df7509d6d4", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-438604-m03", UID:"c4e8079c-090e-4ec8-bed3-758a2b55d94c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-438604-m03" not found
	I0408 11:48:59.601890       1 event.go:376] "Event occurred" object="ha-438604-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-438604-m03 event: Removing Node ha-438604-m03 from Controller"
	E0408 11:49:14.462112       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:14.462220       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:14.462236       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:14.462243       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:14.462252       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	I0408 11:49:29.620749       1 event.go:376] "Event occurred" object="ha-438604-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-438604-m04 status is now: NodeNotReady"
	I0408 11:49:29.652375       1 event.go:376] "Event occurred" object="kube-system/kindnet-8rrcs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 11:49:29.674926       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-2vmwq" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 11:49:29.690838       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-774pb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 11:49:29.715150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="23.845164ms"
	I0408 11:49:29.716367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="33.814µs"
	E0408 11:49:34.463366       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:34.463611       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:34.463646       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:34.463670       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	E0408 11:49:34.463694       1 gc_controller.go:153] "Failed to get node" err="node \"ha-438604-m03\" not found" node="ha-438604-m03"
	
	
	==> kube-controller-manager [9f057306177534260b414dc92037b9087d1e6bf5949f74a1ed05097a91ea6b38] <==
	I0408 11:46:18.030280       1 serving.go:380] Generated self-signed cert in-memory
	I0408 11:46:18.306917       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0408 11:46:18.306964       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:46:18.308813       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0408 11:46:18.309204       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0408 11:46:18.310134       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0408 11:46:18.310162       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0408 11:46:38.827167       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.99:8443/healthz\": dial tcp 192.168.39.99:8443: connect: connection refused"
	
	
	==> kube-proxy [a0bffd365d14f7b38e24ab374aa8b73ed87c121f1b9d988d4b8109c191fe9119] <==
	E0408 11:43:24.246691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:27.318076       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:27.318204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:27.318081       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:27.318262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:30.390062       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:30.390204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:33.462962       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:33.463096       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:33.463418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:33.463451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:36.535471       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:36.535583       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:42.679691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:42.679762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:45.750337       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:45.750417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:43:48.822683       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:43:48.822733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:44:01.110841       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:44:01.110983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1877": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:44:01.111805       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:44:01.111935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1860": dial tcp 192.168.39.254:8443: connect: no route to host
	W0408 11:44:10.327582       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0408 11:44:10.327653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-438604&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [ec3dd5d319504393b1dcb9339da325aadff264ee9d15d78ec9a7cd0f85e68bc3] <==
	I0408 11:46:18.315345       1 server_others.go:72] "Using iptables proxy"
	E0408 11:46:19.350078       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:22.423096       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:25.494114       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:31.638297       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0408 11:46:40.855172       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-438604\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0408 11:46:58.432669       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	I0408 11:46:58.517415       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 11:46:58.517502       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 11:46:58.517903       1 server_others.go:168] "Using iptables Proxier"
	I0408 11:46:58.527703       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 11:46:58.532997       1 server.go:865] "Version info" version="v1.29.3"
	I0408 11:46:58.534476       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 11:46:58.536099       1 config.go:188] "Starting service config controller"
	I0408 11:46:58.536231       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 11:46:58.536435       1 config.go:97] "Starting endpoint slice config controller"
	I0408 11:46:58.536506       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 11:46:58.538905       1 config.go:315] "Starting node config controller"
	I0408 11:46:58.539007       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 11:46:58.637606       1 shared_informer.go:318] Caches are synced for service config
	I0408 11:46:58.639288       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 11:46:58.639987       1 shared_informer.go:318] Caches are synced for node config
	W0408 11:49:43.538110       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0408 11:49:43.538109       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0408 11:49:43.538201       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [982252ef21b29050959b74456febb10c1c1728ebd8b581f0e347a28f2fd4fc6a] <==
	E0408 11:44:30.936727       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:31.013765       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 11:44:31.013907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 11:44:31.432981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 11:44:31.433078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 11:44:35.145634       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 11:44:35.145782       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 11:44:35.243702       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:35.243822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:35.579374       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 11:44:35.579501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 11:44:35.907693       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 11:44:35.907817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 11:44:36.069933       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 11:44:36.070048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 11:44:36.592786       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 11:44:36.592919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 11:44:38.948931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 11:44:38.948964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 11:44:39.060629       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 11:44:39.060660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0408 11:44:39.094779       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0408 11:44:39.094927       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0408 11:44:39.095228       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0408 11:44:39.098865       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [abb0230fab47a33f4b085e82efc91757e6ebc01e86ef7887e80e222289297ada] <==
	W0408 11:46:47.677402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://192.168.39.99:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:47.677655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.99:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:47.739255       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.99:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:47.739324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.99:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:48.939042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.99:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:48.939128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.99:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:48.988462       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.99:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:48.988508       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.99:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:52.340379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.99:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:52.340632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.99:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:52.879856       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.99:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:52.880008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.99:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:54.196674       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.99:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	E0408 11:46:54.196750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.99:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.99:8443: connect: connection refused
	W0408 11:46:57.427412       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 11:46:57.429388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 11:46:57.457809       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 11:46:57.457910       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 11:46:57.458073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 11:46:57.458493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0408 11:46:58.536892       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0408 11:48:40.327289       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-774pb\": pod busybox-7fdf7869d9-774pb is already assigned to node \"ha-438604-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-774pb" node="ha-438604-m04"
	E0408 11:48:40.327410       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a6a676c4-338b-4198-b300-8bed45406a73(default/busybox-7fdf7869d9-774pb) wasn't assumed so cannot be forgotten"
	E0408 11:48:40.327507       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-774pb\": pod busybox-7fdf7869d9-774pb is already assigned to node \"ha-438604-m04\"" pod="default/busybox-7fdf7869d9-774pb"
	I0408 11:48:40.329372       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-774pb" node="ha-438604-m04"
	
	
	==> kubelet <==
	Apr 08 11:49:39 ha-438604 kubelet[1376]: E0408 11:49:39.441068    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:49:48 ha-438604 kubelet[1376]: E0408 11:49:48.493423    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:49:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:49:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:49:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:49:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:49:53 ha-438604 kubelet[1376]: I0408 11:49:53.440128    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:49:53 ha-438604 kubelet[1376]: E0408 11:49:53.440410    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:50:06 ha-438604 kubelet[1376]: I0408 11:50:06.440350    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:50:06 ha-438604 kubelet[1376]: E0408 11:50:06.441059    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:50:17 ha-438604 kubelet[1376]: I0408 11:50:17.440180    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:50:17 ha-438604 kubelet[1376]: E0408 11:50:17.440492    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:50:29 ha-438604 kubelet[1376]: I0408 11:50:29.439741    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:50:29 ha-438604 kubelet[1376]: E0408 11:50:29.440021    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:50:44 ha-438604 kubelet[1376]: I0408 11:50:44.440082    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:50:44 ha-438604 kubelet[1376]: E0408 11:50:44.440429    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:50:48 ha-438604 kubelet[1376]: E0408 11:50:48.494659    1376 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 11:50:48 ha-438604 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 11:50:48 ha-438604 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 11:50:48 ha-438604 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 11:50:48 ha-438604 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 11:50:57 ha-438604 kubelet[1376]: I0408 11:50:57.440217    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:50:57 ha-438604 kubelet[1376]: E0408 11:50:57.440935    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	Apr 08 11:51:10 ha-438604 kubelet[1376]: I0408 11:51:10.440426    1376 scope.go:117] "RemoveContainer" containerID="3a82aec0e7a5967059fdd0d4cd9ce1c5825ea1291187a837fc57234e7276428f"
	Apr 08 11:51:10 ha-438604 kubelet[1376]: E0408 11:51:10.441139    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(46a902f5-0192-4a86-bfe4-4b4d663402c1)\"" pod="kube-system/storage-provisioner" podUID="46a902f5-0192-4a86-bfe4-4b4d663402c1"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 11:51:17.327913  394481 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18588-368424/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
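The "bufio.Scanner: token too long" error in the stderr above is the standard failure mode when a single line in the scanned file (here lastStart.txt) exceeds the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize). A minimal, self-contained Go sketch (not minikube's actual logs.go code) that reproduces the error and shows the usual Scanner.Buffer remedy:

	// Sketch only: demonstrates why scanning a file with a very long line fails,
	// and how enlarging the scanner buffer avoids it.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) triggers the error.
		long := strings.Repeat("x", 2*bufio.MaxScanTokenSize)

		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Fprintln(os.Stderr, "default buffer:", s.Err()) // bufio.Scanner: token too long

		// Remedy: raise the maximum token size before scanning.
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 1024*1024), 4*1024*1024)
		for s.Scan() {
		}
		fmt.Fprintln(os.Stderr, "enlarged buffer:", s.Err()) // <nil>
	}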
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-438604 -n ha-438604
helpers_test.go:261: (dbg) Run:  kubectl --context ha-438604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.33s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (305.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-830937
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-830937
E0408 12:06:47.589155  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:08:06.833159  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:08:44.544529  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-830937: exit status 82 (2m2.741123501s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-830937-m03"  ...
	* Stopping node "multinode-830937-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
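Exit status 82 with the GUEST_STOP_TIMEOUT message above means the kvm2 driver could not get the libvirt domains to power off within its stop timeout. A hypothetical follow-up sketch (not part of the test or of minikube itself), assuming the node domains carry the profile names shown in this report, that asks libvirt for a graceful shutdown directly via virsh, which is roughly the same libvirt shutdown call the driver makes under the hood:

	// Sketch only: fall back to a direct libvirt shutdown when "minikube stop" times out.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func stopDomain(name string) error {
		// Graceful ACPI shutdown; "virsh destroy" would be the hard power-off of last resort.
		out, err := exec.Command("virsh", "-c", "qemu:///system", "shutdown", name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh shutdown %s: %v: %s", name, err, out)
		}
		return nil
	}

	func main() {
		// Domain names assumed from the profile above; adjust to the nodes still reported as Running.
		for _, node := range []string{"multinode-830937-m03", "multinode-830937-m02", "multinode-830937"} {
			if err := stopDomain(node); err != nil {
				fmt.Println(err)
			}
		}
	}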
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-830937" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-830937 --wait=true -v=8 --alsologtostderr
E0408 12:11:09.878250  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-830937 --wait=true -v=8 --alsologtostderr: (3m0.002803931s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-830937
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-830937 -n multinode-830937
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-830937 logs -n 25: (1.685722137s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1863887303/001/cp-test_multinode-830937-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937:/home/docker/cp-test_multinode-830937-m02_multinode-830937.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937 sudo cat                                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m02_multinode-830937.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03:/home/docker/cp-test_multinode-830937-m02_multinode-830937-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937-m03 sudo cat                                   | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m02_multinode-830937-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp testdata/cp-test.txt                                                | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1863887303/001/cp-test_multinode-830937-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937:/home/docker/cp-test_multinode-830937-m03_multinode-830937.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937 sudo cat                                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m03_multinode-830937.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02:/home/docker/cp-test_multinode-830937-m03_multinode-830937-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937-m02 sudo cat                                   | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m03_multinode-830937-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-830937 node stop m03                                                          | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	| node    | multinode-830937 node start                                                             | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-830937                                                                | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC |                     |
	| stop    | -p multinode-830937                                                                     | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC |                     |
	| start   | -p multinode-830937                                                                     | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:08 UTC | 08 Apr 24 12:11 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-830937                                                                | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:11 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:08:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:08:47.725583  403745 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:08:47.726223  403745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:08:47.726245  403745 out.go:304] Setting ErrFile to fd 2...
	I0408 12:08:47.726253  403745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:08:47.726731  403745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:08:47.727715  403745 out.go:298] Setting JSON to false
	I0408 12:08:47.728711  403745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6671,"bootTime":1712571457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:08:47.728785  403745 start.go:139] virtualization: kvm guest
	I0408 12:08:47.731451  403745 out.go:177] * [multinode-830937] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:08:47.733071  403745 notify.go:220] Checking for updates...
	I0408 12:08:47.733085  403745 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:08:47.734820  403745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:08:47.736276  403745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:08:47.737446  403745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:08:47.738791  403745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:08:47.740002  403745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:08:47.741607  403745 config.go:182] Loaded profile config "multinode-830937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:08:47.741730  403745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:08:47.742372  403745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:08:47.742435  403745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:08:47.757790  403745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0408 12:08:47.758348  403745 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:08:47.759004  403745 main.go:141] libmachine: Using API Version  1
	I0408 12:08:47.759027  403745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:08:47.759419  403745 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:08:47.759670  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:08:47.796833  403745 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:08:47.798143  403745 start.go:297] selected driver: kvm2
	I0408 12:08:47.798163  403745 start.go:901] validating driver "kvm2" against &{Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:08:47.798302  403745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:08:47.798757  403745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:08:47.798862  403745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:08:47.815055  403745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:08:47.815802  403745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:08:47.815881  403745 cni.go:84] Creating CNI manager for ""
	I0408 12:08:47.815897  403745 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0408 12:08:47.815983  403745 start.go:340] cluster config:
	{Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:08:47.816118  403745 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:08:47.818011  403745 out.go:177] * Starting "multinode-830937" primary control-plane node in "multinode-830937" cluster
	I0408 12:08:47.819283  403745 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:08:47.819341  403745 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 12:08:47.819352  403745 cache.go:56] Caching tarball of preloaded images
	I0408 12:08:47.819440  403745 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:08:47.819452  403745 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 12:08:47.819581  403745 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/config.json ...
	I0408 12:08:47.819856  403745 start.go:360] acquireMachinesLock for multinode-830937: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:08:47.819920  403745 start.go:364] duration metric: took 29.457µs to acquireMachinesLock for "multinode-830937"
	I0408 12:08:47.819935  403745 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:08:47.819943  403745 fix.go:54] fixHost starting: 
	I0408 12:08:47.820194  403745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:08:47.820226  403745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:08:47.835866  403745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0408 12:08:47.836414  403745 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:08:47.836943  403745 main.go:141] libmachine: Using API Version  1
	I0408 12:08:47.836977  403745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:08:47.837413  403745 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:08:47.837674  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:08:47.837874  403745 main.go:141] libmachine: (multinode-830937) Calling .GetState
	I0408 12:08:47.839814  403745 fix.go:112] recreateIfNeeded on multinode-830937: state=Running err=<nil>
	W0408 12:08:47.839841  403745 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:08:47.842580  403745 out.go:177] * Updating the running kvm2 "multinode-830937" VM ...
	I0408 12:08:47.844375  403745 machine.go:94] provisionDockerMachine start ...
	I0408 12:08:47.844421  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:08:47.844803  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:47.847538  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.848089  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:47.848123  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.848416  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:47.848610  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.848792  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.848980  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:47.849216  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:47.849477  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:47.849495  403745 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:08:47.965312  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-830937
	
	I0408 12:08:47.965349  403745 main.go:141] libmachine: (multinode-830937) Calling .GetMachineName
	I0408 12:08:47.965617  403745 buildroot.go:166] provisioning hostname "multinode-830937"
	I0408 12:08:47.965653  403745 main.go:141] libmachine: (multinode-830937) Calling .GetMachineName
	I0408 12:08:47.965895  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:47.968381  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.968959  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:47.968994  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.969117  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:47.969335  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.969506  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.969668  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:47.969870  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:47.970060  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:47.970073  403745 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-830937 && echo "multinode-830937" | sudo tee /etc/hostname
	I0408 12:08:48.102952  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-830937
	
	I0408 12:08:48.102996  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.105974  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.106406  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.106446  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.106667  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:48.106875  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.107032  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.107157  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:48.107308  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:48.107534  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:48.107560  403745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-830937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-830937/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-830937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:08:48.221115  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:08:48.221150  403745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:08:48.221169  403745 buildroot.go:174] setting up certificates
	I0408 12:08:48.221181  403745 provision.go:84] configureAuth start
	I0408 12:08:48.221189  403745 main.go:141] libmachine: (multinode-830937) Calling .GetMachineName
	I0408 12:08:48.221543  403745 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:08:48.224185  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.224480  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.224511  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.224690  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.226699  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.226998  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.227021  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.227165  403745 provision.go:143] copyHostCerts
	I0408 12:08:48.227208  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:08:48.227238  403745 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:08:48.227254  403745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:08:48.227318  403745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:08:48.227405  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:08:48.227428  403745 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:08:48.227435  403745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:08:48.227458  403745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:08:48.227531  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:08:48.227555  403745 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:08:48.227561  403745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:08:48.227584  403745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:08:48.227646  403745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.multinode-830937 san=[127.0.0.1 192.168.39.209 localhost minikube multinode-830937]
	I0408 12:08:48.470465  403745 provision.go:177] copyRemoteCerts
	I0408 12:08:48.470539  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:08:48.470608  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.473428  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.473905  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.473942  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.474215  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:48.474453  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.474713  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:48.474914  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:08:48.563654  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 12:08:48.563763  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:08:48.596776  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 12:08:48.596866  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0408 12:08:48.624649  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 12:08:48.624728  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:08:48.651547  403745 provision.go:87] duration metric: took 430.350036ms to configureAuth
	I0408 12:08:48.651588  403745 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:08:48.651874  403745 config.go:182] Loaded profile config "multinode-830937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:08:48.651967  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.655233  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.655634  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.655668  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.655854  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:48.656110  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.656286  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.656517  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:48.656748  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:48.656970  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:48.656985  403745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:10:19.438708  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:10:19.438749  403745 machine.go:97] duration metric: took 1m31.59434398s to provisionDockerMachine
	I0408 12:10:19.438769  403745 start.go:293] postStartSetup for "multinode-830937" (driver="kvm2")
	I0408 12:10:19.438786  403745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:10:19.438849  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.439244  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:10:19.439282  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.442802  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.443346  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.443381  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.443559  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.443793  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.443991  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.444159  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:10:19.544501  403745 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:10:19.549079  403745 command_runner.go:130] > NAME=Buildroot
	I0408 12:10:19.549103  403745 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 12:10:19.549109  403745 command_runner.go:130] > ID=buildroot
	I0408 12:10:19.549116  403745 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 12:10:19.549136  403745 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 12:10:19.549405  403745 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:10:19.549427  403745 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:10:19.549499  403745 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:10:19.549600  403745 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:10:19.549614  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 12:10:19.549731  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:10:19.560135  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:10:19.586870  403745 start.go:296] duration metric: took 148.083692ms for postStartSetup
	I0408 12:10:19.586925  403745 fix.go:56] duration metric: took 1m31.766981525s for fixHost
	I0408 12:10:19.586951  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.589958  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.590431  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.590477  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.590631  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.590851  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.591033  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.591241  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.591401  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:10:19.591598  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:10:19.591624  403745 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:10:19.705001  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712578219.685087142
	
	I0408 12:10:19.705034  403745 fix.go:216] guest clock: 1712578219.685087142
	I0408 12:10:19.705046  403745 fix.go:229] Guest: 2024-04-08 12:10:19.685087142 +0000 UTC Remote: 2024-04-08 12:10:19.5869296 +0000 UTC m=+91.912507154 (delta=98.157542ms)
	I0408 12:10:19.705074  403745 fix.go:200] guest clock delta is within tolerance: 98.157542ms
	I0408 12:10:19.705095  403745 start.go:83] releasing machines lock for "multinode-830937", held for 1m31.885154805s
	I0408 12:10:19.705127  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.705419  403745 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:10:19.708120  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.708658  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.708693  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.708861  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.709399  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.709606  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.709710  403745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:10:19.709752  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.709833  403745 ssh_runner.go:195] Run: cat /version.json
	I0408 12:10:19.709849  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.712528  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.712887  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.712935  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.712994  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.713072  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.713263  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.713403  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.713416  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.713430  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.713590  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:10:19.713606  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.713858  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.714036  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.714186  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:10:19.793140  403745 command_runner.go:130] > {"iso_version": "v1.33.0-1712138767-18566", "kicbase_version": "v0.0.43-1711559786-18485", "minikube_version": "v1.33.0-beta.0", "commit": "5c97bd855810b9924fd5c0368bb36a4a341f7234"}
	I0408 12:10:19.793445  403745 ssh_runner.go:195] Run: systemctl --version
	I0408 12:10:19.835614  403745 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 12:10:19.835679  403745 command_runner.go:130] > systemd 252 (252)
	I0408 12:10:19.835722  403745 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 12:10:19.835799  403745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:10:20.000529  403745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 12:10:20.006940  403745 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 12:10:20.007001  403745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:10:20.007053  403745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:10:20.018382  403745 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 12:10:20.018415  403745 start.go:494] detecting cgroup driver to use...
	I0408 12:10:20.018484  403745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:10:20.036437  403745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:10:20.051035  403745 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:10:20.051116  403745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:10:20.065936  403745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:10:20.081077  403745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:10:20.235818  403745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:10:20.379070  403745 docker.go:233] disabling docker service ...
	I0408 12:10:20.379161  403745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:10:20.398308  403745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:10:20.414085  403745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:10:20.554603  403745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:10:20.701792  403745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:10:20.716967  403745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:10:20.736604  403745 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0408 12:10:20.736984  403745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:10:20.737051  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.748395  403745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:10:20.748480  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.759991  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.771682  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.783013  403745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:10:20.795112  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.807071  403745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.820107  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.831983  403745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:10:20.842888  403745 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 12:10:20.843002  403745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:10:20.853466  403745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:10:20.999514  403745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:10:21.262491  403745 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:10:21.262581  403745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:10:21.267793  403745 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 12:10:21.267822  403745 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 12:10:21.267831  403745 command_runner.go:130] > Device: 0,22	Inode: 1307        Links: 1
	I0408 12:10:21.267840  403745 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 12:10:21.267848  403745 command_runner.go:130] > Access: 2024-04-08 12:10:21.118310357 +0000
	I0408 12:10:21.267856  403745 command_runner.go:130] > Modify: 2024-04-08 12:10:21.118310357 +0000
	I0408 12:10:21.267863  403745 command_runner.go:130] > Change: 2024-04-08 12:10:21.118310357 +0000
	I0408 12:10:21.267869  403745 command_runner.go:130] >  Birth: -
	I0408 12:10:21.268001  403745 start.go:562] Will wait 60s for crictl version
	I0408 12:10:21.268052  403745 ssh_runner.go:195] Run: which crictl
	I0408 12:10:21.272093  403745 command_runner.go:130] > /usr/bin/crictl
	I0408 12:10:21.272176  403745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:10:21.315507  403745 command_runner.go:130] > Version:  0.1.0
	I0408 12:10:21.315542  403745 command_runner.go:130] > RuntimeName:  cri-o
	I0408 12:10:21.315548  403745 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 12:10:21.315556  403745 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 12:10:21.315579  403745 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:10:21.315643  403745 ssh_runner.go:195] Run: crio --version
	I0408 12:10:21.347531  403745 command_runner.go:130] > crio version 1.29.1
	I0408 12:10:21.347562  403745 command_runner.go:130] > Version:        1.29.1
	I0408 12:10:21.347575  403745 command_runner.go:130] > GitCommit:      unknown
	I0408 12:10:21.347582  403745 command_runner.go:130] > GitCommitDate:  unknown
	I0408 12:10:21.347587  403745 command_runner.go:130] > GitTreeState:   clean
	I0408 12:10:21.347598  403745 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0408 12:10:21.347603  403745 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 12:10:21.347608  403745 command_runner.go:130] > Compiler:       gc
	I0408 12:10:21.347614  403745 command_runner.go:130] > Platform:       linux/amd64
	I0408 12:10:21.347621  403745 command_runner.go:130] > Linkmode:       dynamic
	I0408 12:10:21.347628  403745 command_runner.go:130] > BuildTags:      
	I0408 12:10:21.347635  403745 command_runner.go:130] >   containers_image_ostree_stub
	I0408 12:10:21.347642  403745 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 12:10:21.347650  403745 command_runner.go:130] >   btrfs_noversion
	I0408 12:10:21.347658  403745 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 12:10:21.347666  403745 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 12:10:21.347672  403745 command_runner.go:130] >   seccomp
	I0408 12:10:21.347680  403745 command_runner.go:130] > LDFlags:          unknown
	I0408 12:10:21.347701  403745 command_runner.go:130] > SeccompEnabled:   true
	I0408 12:10:21.347712  403745 command_runner.go:130] > AppArmorEnabled:  false
	I0408 12:10:21.347799  403745 ssh_runner.go:195] Run: crio --version
	I0408 12:10:21.378502  403745 command_runner.go:130] > crio version 1.29.1
	I0408 12:10:21.378537  403745 command_runner.go:130] > Version:        1.29.1
	I0408 12:10:21.378546  403745 command_runner.go:130] > GitCommit:      unknown
	I0408 12:10:21.378552  403745 command_runner.go:130] > GitCommitDate:  unknown
	I0408 12:10:21.378558  403745 command_runner.go:130] > GitTreeState:   clean
	I0408 12:10:21.378570  403745 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0408 12:10:21.378575  403745 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 12:10:21.378578  403745 command_runner.go:130] > Compiler:       gc
	I0408 12:10:21.378582  403745 command_runner.go:130] > Platform:       linux/amd64
	I0408 12:10:21.378587  403745 command_runner.go:130] > Linkmode:       dynamic
	I0408 12:10:21.378594  403745 command_runner.go:130] > BuildTags:      
	I0408 12:10:21.378601  403745 command_runner.go:130] >   containers_image_ostree_stub
	I0408 12:10:21.378609  403745 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 12:10:21.378615  403745 command_runner.go:130] >   btrfs_noversion
	I0408 12:10:21.378632  403745 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 12:10:21.378638  403745 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 12:10:21.378643  403745 command_runner.go:130] >   seccomp
	I0408 12:10:21.378724  403745 command_runner.go:130] > LDFlags:          unknown
	I0408 12:10:21.378759  403745 command_runner.go:130] > SeccompEnabled:   true
	I0408 12:10:21.378767  403745 command_runner.go:130] > AppArmorEnabled:  false
	I0408 12:10:21.380809  403745 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:10:21.382388  403745 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:10:21.384892  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:21.385368  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:21.385400  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:21.385617  403745 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:10:21.390567  403745 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 12:10:21.390690  403745 kubeadm.go:877] updating cluster {Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:10:21.390876  403745 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:10:21.390949  403745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:10:21.440570  403745 command_runner.go:130] > {
	I0408 12:10:21.440599  403745 command_runner.go:130] >   "images": [
	I0408 12:10:21.440605  403745 command_runner.go:130] >     {
	I0408 12:10:21.440623  403745 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0408 12:10:21.440630  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.440639  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0408 12:10:21.440644  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440651  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.440664  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0408 12:10:21.440679  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0408 12:10:21.440688  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440696  403745 command_runner.go:130] >       "size": "65291810",
	I0408 12:10:21.440706  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.440716  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.440736  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.440745  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.440752  403745 command_runner.go:130] >     },
	I0408 12:10:21.440761  403745 command_runner.go:130] >     {
	I0408 12:10:21.440772  403745 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0408 12:10:21.440790  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.440801  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0408 12:10:21.440814  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440824  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.440839  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0408 12:10:21.440854  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0408 12:10:21.440863  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440870  403745 command_runner.go:130] >       "size": "1363676",
	I0408 12:10:21.440881  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.440896  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.440905  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.440912  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.440922  403745 command_runner.go:130] >     },
	I0408 12:10:21.440929  403745 command_runner.go:130] >     {
	I0408 12:10:21.440943  403745 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 12:10:21.440950  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.440962  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 12:10:21.440970  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440978  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.440994  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 12:10:21.441016  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 12:10:21.441030  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441037  403745 command_runner.go:130] >       "size": "31470524",
	I0408 12:10:21.441043  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.441050  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441060  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441069  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441078  403745 command_runner.go:130] >     },
	I0408 12:10:21.441085  403745 command_runner.go:130] >     {
	I0408 12:10:21.441099  403745 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0408 12:10:21.441108  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441117  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0408 12:10:21.441125  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441138  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441153  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0408 12:10:21.441177  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0408 12:10:21.441186  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441194  403745 command_runner.go:130] >       "size": "61245718",
	I0408 12:10:21.441204  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.441213  403745 command_runner.go:130] >       "username": "nonroot",
	I0408 12:10:21.441221  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441231  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441239  403745 command_runner.go:130] >     },
	I0408 12:10:21.441247  403745 command_runner.go:130] >     {
	I0408 12:10:21.441257  403745 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0408 12:10:21.441267  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441275  403745 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0408 12:10:21.441283  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441290  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441305  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0408 12:10:21.441319  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0408 12:10:21.441328  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441335  403745 command_runner.go:130] >       "size": "150779692",
	I0408 12:10:21.441345  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441353  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441375  403745 command_runner.go:130] >       },
	I0408 12:10:21.441393  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441410  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441419  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441425  403745 command_runner.go:130] >     },
	I0408 12:10:21.441432  403745 command_runner.go:130] >     {
	I0408 12:10:21.441445  403745 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0408 12:10:21.441455  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441464  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0408 12:10:21.441477  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441487  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441501  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0408 12:10:21.441518  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0408 12:10:21.441522  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441528  403745 command_runner.go:130] >       "size": "128508878",
	I0408 12:10:21.441534  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441540  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441546  403745 command_runner.go:130] >       },
	I0408 12:10:21.441552  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441562  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441568  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441576  403745 command_runner.go:130] >     },
	I0408 12:10:21.441579  403745 command_runner.go:130] >     {
	I0408 12:10:21.441588  403745 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0408 12:10:21.441592  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441599  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0408 12:10:21.441603  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441608  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441616  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0408 12:10:21.441626  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0408 12:10:21.441636  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441642  403745 command_runner.go:130] >       "size": "123142962",
	I0408 12:10:21.441647  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441654  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441663  403745 command_runner.go:130] >       },
	I0408 12:10:21.441680  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441691  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441705  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441714  403745 command_runner.go:130] >     },
	I0408 12:10:21.441718  403745 command_runner.go:130] >     {
	I0408 12:10:21.441729  403745 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0408 12:10:21.441739  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441748  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0408 12:10:21.441754  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441763  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441795  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0408 12:10:21.441809  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0408 12:10:21.441817  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441826  403745 command_runner.go:130] >       "size": "83634073",
	I0408 12:10:21.441834  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.441841  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441848  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441853  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441858  403745 command_runner.go:130] >     },
	I0408 12:10:21.441863  403745 command_runner.go:130] >     {
	I0408 12:10:21.441871  403745 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0408 12:10:21.441877  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441883  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0408 12:10:21.441888  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441893  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441903  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0408 12:10:21.441915  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0408 12:10:21.441920  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441926  403745 command_runner.go:130] >       "size": "60724018",
	I0408 12:10:21.441931  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441941  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441947  403745 command_runner.go:130] >       },
	I0408 12:10:21.441956  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441962  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441971  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441979  403745 command_runner.go:130] >     },
	I0408 12:10:21.441984  403745 command_runner.go:130] >     {
	I0408 12:10:21.441997  403745 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0408 12:10:21.442016  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.442026  403745 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0408 12:10:21.442034  403745 command_runner.go:130] >       ],
	I0408 12:10:21.442040  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.442051  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0408 12:10:21.442061  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0408 12:10:21.442066  403745 command_runner.go:130] >       ],
	I0408 12:10:21.442071  403745 command_runner.go:130] >       "size": "750414",
	I0408 12:10:21.442077  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.442083  403745 command_runner.go:130] >         "value": "65535"
	I0408 12:10:21.442088  403745 command_runner.go:130] >       },
	I0408 12:10:21.442094  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.442104  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.442111  403745 command_runner.go:130] >       "pinned": true
	I0408 12:10:21.442120  403745 command_runner.go:130] >     }
	I0408 12:10:21.442125  403745 command_runner.go:130] >   ]
	I0408 12:10:21.442130  403745 command_runner.go:130] > }
	I0408 12:10:21.442422  403745 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:10:21.442442  403745 crio.go:433] Images already preloaded, skipping extraction
	I0408 12:10:21.442514  403745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:10:21.482151  403745 command_runner.go:130] > {
	I0408 12:10:21.482181  403745 command_runner.go:130] >   "images": [
	I0408 12:10:21.482186  403745 command_runner.go:130] >     {
	I0408 12:10:21.482193  403745 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0408 12:10:21.482205  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482212  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0408 12:10:21.482215  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482220  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482234  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0408 12:10:21.482249  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0408 12:10:21.482256  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482263  403745 command_runner.go:130] >       "size": "65291810",
	I0408 12:10:21.482271  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482275  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482291  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482297  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482301  403745 command_runner.go:130] >     },
	I0408 12:10:21.482304  403745 command_runner.go:130] >     {
	I0408 12:10:21.482317  403745 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0408 12:10:21.482324  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482336  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0408 12:10:21.482345  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482351  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482365  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0408 12:10:21.482377  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0408 12:10:21.482383  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482387  403745 command_runner.go:130] >       "size": "1363676",
	I0408 12:10:21.482394  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482404  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482414  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482425  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482449  403745 command_runner.go:130] >     },
	I0408 12:10:21.482458  403745 command_runner.go:130] >     {
	I0408 12:10:21.482468  403745 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 12:10:21.482477  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482487  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 12:10:21.482496  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482507  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482522  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 12:10:21.482537  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 12:10:21.482552  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482562  403745 command_runner.go:130] >       "size": "31470524",
	I0408 12:10:21.482566  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482570  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482577  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482588  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482597  403745 command_runner.go:130] >     },
	I0408 12:10:21.482603  403745 command_runner.go:130] >     {
	I0408 12:10:21.482616  403745 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0408 12:10:21.482626  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482637  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0408 12:10:21.482645  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482652  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482660  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0408 12:10:21.482691  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0408 12:10:21.482703  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482709  403745 command_runner.go:130] >       "size": "61245718",
	I0408 12:10:21.482719  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482728  403745 command_runner.go:130] >       "username": "nonroot",
	I0408 12:10:21.482738  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482747  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482756  403745 command_runner.go:130] >     },
	I0408 12:10:21.482766  403745 command_runner.go:130] >     {
	I0408 12:10:21.482778  403745 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0408 12:10:21.482788  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482798  403745 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0408 12:10:21.482806  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482816  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482824  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0408 12:10:21.482837  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0408 12:10:21.482847  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482856  403745 command_runner.go:130] >       "size": "150779692",
	I0408 12:10:21.482872  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.482881  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.482890  403745 command_runner.go:130] >       },
	I0408 12:10:21.482900  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482911  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482919  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482928  403745 command_runner.go:130] >     },
	I0408 12:10:21.482937  403745 command_runner.go:130] >     {
	I0408 12:10:21.482950  403745 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0408 12:10:21.482960  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482971  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0408 12:10:21.482980  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482989  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483002  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0408 12:10:21.483014  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0408 12:10:21.483024  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483035  403745 command_runner.go:130] >       "size": "128508878",
	I0408 12:10:21.483044  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483054  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.483063  403745 command_runner.go:130] >       },
	I0408 12:10:21.483073  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483082  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483089  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483092  403745 command_runner.go:130] >     },
	I0408 12:10:21.483096  403745 command_runner.go:130] >     {
	I0408 12:10:21.483110  403745 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0408 12:10:21.483120  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483132  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0408 12:10:21.483140  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483150  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483162  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0408 12:10:21.483174  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0408 12:10:21.483186  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483196  403745 command_runner.go:130] >       "size": "123142962",
	I0408 12:10:21.483213  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483223  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.483231  403745 command_runner.go:130] >       },
	I0408 12:10:21.483240  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483250  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483259  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483273  403745 command_runner.go:130] >     },
	I0408 12:10:21.483281  403745 command_runner.go:130] >     {
	I0408 12:10:21.483295  403745 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0408 12:10:21.483305  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483316  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0408 12:10:21.483326  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483336  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483363  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0408 12:10:21.483381  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0408 12:10:21.483386  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483391  403745 command_runner.go:130] >       "size": "83634073",
	I0408 12:10:21.483395  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.483401  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483408  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483414  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483422  403745 command_runner.go:130] >     },
	I0408 12:10:21.483428  403745 command_runner.go:130] >     {
	I0408 12:10:21.483447  403745 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0408 12:10:21.483457  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483467  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0408 12:10:21.483477  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483485  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483500  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0408 12:10:21.483516  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0408 12:10:21.483524  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483532  403745 command_runner.go:130] >       "size": "60724018",
	I0408 12:10:21.483541  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483549  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.483557  403745 command_runner.go:130] >       },
	I0408 12:10:21.483564  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483571  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483580  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483588  403745 command_runner.go:130] >     },
	I0408 12:10:21.483597  403745 command_runner.go:130] >     {
	I0408 12:10:21.483608  403745 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0408 12:10:21.483618  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483635  403745 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0408 12:10:21.483649  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483657  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483672  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0408 12:10:21.483701  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0408 12:10:21.483710  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483718  403745 command_runner.go:130] >       "size": "750414",
	I0408 12:10:21.483726  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483734  403745 command_runner.go:130] >         "value": "65535"
	I0408 12:10:21.483742  403745 command_runner.go:130] >       },
	I0408 12:10:21.483749  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483760  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483769  403745 command_runner.go:130] >       "pinned": true
	I0408 12:10:21.483775  403745 command_runner.go:130] >     }
	I0408 12:10:21.483781  403745 command_runner.go:130] >   ]
	I0408 12:10:21.483789  403745 command_runner.go:130] > }
	I0408 12:10:21.483927  403745 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:10:21.483941  403745 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:10:21.483952  403745 kubeadm.go:928] updating node { 192.168.39.209 8443 v1.29.3 crio true true} ...
	I0408 12:10:21.484088  403745 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-830937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:10:21.484170  403745 ssh_runner.go:195] Run: crio config
	I0408 12:10:21.518981  403745 command_runner.go:130] ! time="2024-04-08 12:10:21.499072989Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 12:10:21.525800  403745 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0408 12:10:21.533738  403745 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 12:10:21.533769  403745 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 12:10:21.533780  403745 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 12:10:21.533786  403745 command_runner.go:130] > #
	I0408 12:10:21.533796  403745 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 12:10:21.533806  403745 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 12:10:21.533821  403745 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 12:10:21.533831  403745 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 12:10:21.533841  403745 command_runner.go:130] > # reload'.
	I0408 12:10:21.533855  403745 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 12:10:21.533868  403745 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 12:10:21.533882  403745 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 12:10:21.533892  403745 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 12:10:21.533901  403745 command_runner.go:130] > [crio]
	I0408 12:10:21.533912  403745 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 12:10:21.533922  403745 command_runner.go:130] > # containers images, in this directory.
	I0408 12:10:21.533929  403745 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 12:10:21.533947  403745 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 12:10:21.533958  403745 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 12:10:21.533973  403745 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 12:10:21.533982  403745 command_runner.go:130] > # imagestore = ""
	I0408 12:10:21.533992  403745 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 12:10:21.534003  403745 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 12:10:21.534011  403745 command_runner.go:130] > storage_driver = "overlay"
	I0408 12:10:21.534016  403745 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 12:10:21.534026  403745 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 12:10:21.534035  403745 command_runner.go:130] > storage_option = [
	I0408 12:10:21.534046  403745 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 12:10:21.534054  403745 command_runner.go:130] > ]
	I0408 12:10:21.534067  403745 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 12:10:21.534080  403745 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 12:10:21.534090  403745 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 12:10:21.534101  403745 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 12:10:21.534107  403745 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 12:10:21.534116  403745 command_runner.go:130] > # always happen on a node reboot
	I0408 12:10:21.534128  403745 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 12:10:21.534152  403745 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 12:10:21.534164  403745 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 12:10:21.534177  403745 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 12:10:21.534186  403745 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 12:10:21.534197  403745 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 12:10:21.534216  403745 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 12:10:21.534227  403745 command_runner.go:130] > # internal_wipe = true
	I0408 12:10:21.534242  403745 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 12:10:21.534259  403745 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 12:10:21.534268  403745 command_runner.go:130] > # internal_repair = false
	I0408 12:10:21.534277  403745 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 12:10:21.534285  403745 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 12:10:21.534297  403745 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 12:10:21.534308  403745 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 12:10:21.534323  403745 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 12:10:21.534333  403745 command_runner.go:130] > [crio.api]
	I0408 12:10:21.534345  403745 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 12:10:21.534355  403745 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 12:10:21.534363  403745 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 12:10:21.534369  403745 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 12:10:21.534383  403745 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 12:10:21.534394  403745 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 12:10:21.534403  403745 command_runner.go:130] > # stream_port = "0"
	I0408 12:10:21.534415  403745 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 12:10:21.534424  403745 command_runner.go:130] > # stream_enable_tls = false
	I0408 12:10:21.534436  403745 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 12:10:21.534449  403745 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 12:10:21.534460  403745 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 12:10:21.534474  403745 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 12:10:21.534482  403745 command_runner.go:130] > # minutes.
	I0408 12:10:21.534489  403745 command_runner.go:130] > # stream_tls_cert = ""
	I0408 12:10:21.534501  403745 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 12:10:21.534513  403745 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 12:10:21.534523  403745 command_runner.go:130] > # stream_tls_key = ""
	I0408 12:10:21.534533  403745 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 12:10:21.534544  403745 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 12:10:21.534573  403745 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 12:10:21.534587  403745 command_runner.go:130] > # stream_tls_ca = ""
	I0408 12:10:21.534598  403745 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 12:10:21.534609  403745 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 12:10:21.534620  403745 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 12:10:21.534628  403745 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0408 12:10:21.534641  403745 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 12:10:21.534653  403745 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 12:10:21.534668  403745 command_runner.go:130] > [crio.runtime]
	I0408 12:10:21.534680  403745 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 12:10:21.534693  403745 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 12:10:21.534701  403745 command_runner.go:130] > # "nofile=1024:2048"
	I0408 12:10:21.534710  403745 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 12:10:21.534719  403745 command_runner.go:130] > # default_ulimits = [
	I0408 12:10:21.534729  403745 command_runner.go:130] > # ]
	I0408 12:10:21.534742  403745 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 12:10:21.534751  403745 command_runner.go:130] > # no_pivot = false
	I0408 12:10:21.534764  403745 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 12:10:21.534776  403745 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 12:10:21.534794  403745 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 12:10:21.534805  403745 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 12:10:21.534817  403745 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 12:10:21.534830  403745 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 12:10:21.534841  403745 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 12:10:21.534850  403745 command_runner.go:130] > # Cgroup setting for conmon
	I0408 12:10:21.534863  403745 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 12:10:21.534873  403745 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 12:10:21.534882  403745 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 12:10:21.534892  403745 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 12:10:21.534906  403745 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 12:10:21.534915  403745 command_runner.go:130] > conmon_env = [
	I0408 12:10:21.534928  403745 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 12:10:21.534936  403745 command_runner.go:130] > ]
	I0408 12:10:21.534947  403745 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 12:10:21.534957  403745 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 12:10:21.534966  403745 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 12:10:21.534973  403745 command_runner.go:130] > # default_env = [
	I0408 12:10:21.534978  403745 command_runner.go:130] > # ]
	I0408 12:10:21.534991  403745 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 12:10:21.535005  403745 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0408 12:10:21.535014  403745 command_runner.go:130] > # selinux = false
	I0408 12:10:21.535025  403745 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 12:10:21.535038  403745 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 12:10:21.535049  403745 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 12:10:21.535063  403745 command_runner.go:130] > # seccomp_profile = ""
	I0408 12:10:21.535075  403745 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 12:10:21.535088  403745 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 12:10:21.535100  403745 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 12:10:21.535110  403745 command_runner.go:130] > # which might increase security.
	I0408 12:10:21.535120  403745 command_runner.go:130] > # This option is currently deprecated,
	I0408 12:10:21.535132  403745 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 12:10:21.535140  403745 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 12:10:21.535150  403745 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 12:10:21.535163  403745 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 12:10:21.535180  403745 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 12:10:21.535192  403745 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 12:10:21.535203  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.535213  403745 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 12:10:21.535224  403745 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 12:10:21.535235  403745 command_runner.go:130] > # the cgroup blockio controller.
	I0408 12:10:21.535245  403745 command_runner.go:130] > # blockio_config_file = ""
	I0408 12:10:21.535259  403745 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 12:10:21.535269  403745 command_runner.go:130] > # blockio parameters.
	I0408 12:10:21.535278  403745 command_runner.go:130] > # blockio_reload = false
	I0408 12:10:21.535291  403745 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 12:10:21.535301  403745 command_runner.go:130] > # irqbalance daemon.
	I0408 12:10:21.535311  403745 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 12:10:21.535320  403745 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0408 12:10:21.535334  403745 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 12:10:21.535347  403745 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 12:10:21.535359  403745 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 12:10:21.535376  403745 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 12:10:21.535387  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.535395  403745 command_runner.go:130] > # rdt_config_file = ""
	I0408 12:10:21.535400  403745 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 12:10:21.535410  403745 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 12:10:21.535470  403745 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 12:10:21.535482  403745 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 12:10:21.535489  403745 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 12:10:21.535499  403745 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 12:10:21.535517  403745 command_runner.go:130] > # will be added.
	I0408 12:10:21.535528  403745 command_runner.go:130] > # default_capabilities = [
	I0408 12:10:21.535537  403745 command_runner.go:130] > # 	"CHOWN",
	I0408 12:10:21.535547  403745 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 12:10:21.535555  403745 command_runner.go:130] > # 	"FSETID",
	I0408 12:10:21.535564  403745 command_runner.go:130] > # 	"FOWNER",
	I0408 12:10:21.535571  403745 command_runner.go:130] > # 	"SETGID",
	I0408 12:10:21.535575  403745 command_runner.go:130] > # 	"SETUID",
	I0408 12:10:21.535578  403745 command_runner.go:130] > # 	"SETPCAP",
	I0408 12:10:21.535588  403745 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 12:10:21.535597  403745 command_runner.go:130] > # 	"KILL",
	I0408 12:10:21.535606  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535620  403745 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 12:10:21.535633  403745 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 12:10:21.535646  403745 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 12:10:21.535657  403745 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 12:10:21.535665  403745 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 12:10:21.535671  403745 command_runner.go:130] > default_sysctls = [
	I0408 12:10:21.535681  403745 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 12:10:21.535696  403745 command_runner.go:130] > ]
	I0408 12:10:21.535705  403745 command_runner.go:130] > # List of devices on the host that a
	I0408 12:10:21.535716  403745 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 12:10:21.535723  403745 command_runner.go:130] > # allowed_devices = [
	I0408 12:10:21.535729  403745 command_runner.go:130] > # 	"/dev/fuse",
	I0408 12:10:21.535734  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535739  403745 command_runner.go:130] > # List of additional devices. specified as
	I0408 12:10:21.535751  403745 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 12:10:21.535769  403745 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 12:10:21.535782  403745 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 12:10:21.535791  403745 command_runner.go:130] > # additional_devices = [
	I0408 12:10:21.535799  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535811  403745 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 12:10:21.535820  403745 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 12:10:21.535827  403745 command_runner.go:130] > # 	"/etc/cdi",
	I0408 12:10:21.535831  403745 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 12:10:21.535839  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535859  403745 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 12:10:21.535872  403745 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 12:10:21.535881  403745 command_runner.go:130] > # Defaults to false.
	I0408 12:10:21.535892  403745 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 12:10:21.535904  403745 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 12:10:21.535914  403745 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 12:10:21.535922  403745 command_runner.go:130] > # hooks_dir = [
	I0408 12:10:21.535933  403745 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 12:10:21.535938  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535951  403745 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 12:10:21.535963  403745 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 12:10:21.535974  403745 command_runner.go:130] > # its default mounts from the following two files:
	I0408 12:10:21.535982  403745 command_runner.go:130] > #
	I0408 12:10:21.535994  403745 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 12:10:21.536003  403745 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 12:10:21.536013  403745 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 12:10:21.536022  403745 command_runner.go:130] > #
	I0408 12:10:21.536031  403745 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 12:10:21.536044  403745 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 12:10:21.536057  403745 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 12:10:21.536071  403745 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 12:10:21.536079  403745 command_runner.go:130] > #
	I0408 12:10:21.536085  403745 command_runner.go:130] > # default_mounts_file = ""
	I0408 12:10:21.536092  403745 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 12:10:21.536106  403745 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 12:10:21.536116  403745 command_runner.go:130] > pids_limit = 1024
	I0408 12:10:21.536129  403745 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0408 12:10:21.536142  403745 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 12:10:21.536154  403745 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 12:10:21.536168  403745 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 12:10:21.536174  403745 command_runner.go:130] > # log_size_max = -1
	I0408 12:10:21.536185  403745 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 12:10:21.536195  403745 command_runner.go:130] > # log_to_journald = false
	I0408 12:10:21.536208  403745 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 12:10:21.536219  403745 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 12:10:21.536226  403745 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 12:10:21.536243  403745 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 12:10:21.536254  403745 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 12:10:21.536261  403745 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 12:10:21.536268  403745 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 12:10:21.536277  403745 command_runner.go:130] > # read_only = false
	I0408 12:10:21.536291  403745 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 12:10:21.536303  403745 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 12:10:21.536312  403745 command_runner.go:130] > # live configuration reload.
	I0408 12:10:21.536322  403745 command_runner.go:130] > # log_level = "info"
	I0408 12:10:21.536333  403745 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 12:10:21.536342  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.536349  403745 command_runner.go:130] > # log_filter = ""
	I0408 12:10:21.536358  403745 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 12:10:21.536374  403745 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 12:10:21.536383  403745 command_runner.go:130] > # separated by comma.
	I0408 12:10:21.536394  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536404  403745 command_runner.go:130] > # uid_mappings = ""
	I0408 12:10:21.536416  403745 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 12:10:21.536427  403745 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 12:10:21.536434  403745 command_runner.go:130] > # separated by comma.
	I0408 12:10:21.536449  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536462  403745 command_runner.go:130] > # gid_mappings = ""
	I0408 12:10:21.536475  403745 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 12:10:21.536486  403745 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 12:10:21.536499  403745 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 12:10:21.536512  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536519  403745 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 12:10:21.536528  403745 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 12:10:21.536540  403745 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 12:10:21.536553  403745 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 12:10:21.536568  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536577  403745 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 12:10:21.536589  403745 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 12:10:21.536601  403745 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 12:10:21.536610  403745 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 12:10:21.536619  403745 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 12:10:21.536638  403745 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 12:10:21.536651  403745 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 12:10:21.536662  403745 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 12:10:21.536676  403745 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 12:10:21.536685  403745 command_runner.go:130] > drop_infra_ctr = false
	I0408 12:10:21.536694  403745 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 12:10:21.536705  403745 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 12:10:21.536720  403745 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 12:10:21.536730  403745 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 12:10:21.536744  403745 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 12:10:21.536755  403745 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 12:10:21.536767  403745 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 12:10:21.536775  403745 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 12:10:21.536783  403745 command_runner.go:130] > # shared_cpuset = ""
	I0408 12:10:21.536796  403745 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 12:10:21.536807  403745 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 12:10:21.536817  403745 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 12:10:21.536831  403745 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 12:10:21.536841  403745 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 12:10:21.536852  403745 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 12:10:21.536864  403745 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 12:10:21.536873  403745 command_runner.go:130] > # enable_criu_support = false
	I0408 12:10:21.536885  403745 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 12:10:21.536897  403745 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 12:10:21.536907  403745 command_runner.go:130] > # enable_pod_events = false
	I0408 12:10:21.536919  403745 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 12:10:21.536931  403745 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 12:10:21.536942  403745 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 12:10:21.536950  403745 command_runner.go:130] > # default_runtime = "runc"
	I0408 12:10:21.536956  403745 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 12:10:21.536977  403745 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0408 12:10:21.536993  403745 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 12:10:21.537004  403745 command_runner.go:130] > # creation as a file is not desired either.
	I0408 12:10:21.537019  403745 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 12:10:21.537029  403745 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 12:10:21.537036  403745 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 12:10:21.537046  403745 command_runner.go:130] > # ]
	I0408 12:10:21.537060  403745 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 12:10:21.537073  403745 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 12:10:21.537085  403745 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 12:10:21.537097  403745 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 12:10:21.537105  403745 command_runner.go:130] > #
	I0408 12:10:21.537115  403745 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 12:10:21.537123  403745 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 12:10:21.537179  403745 command_runner.go:130] > # runtime_type = "oci"
	I0408 12:10:21.537191  403745 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 12:10:21.537199  403745 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 12:10:21.537207  403745 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 12:10:21.537212  403745 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 12:10:21.537220  403745 command_runner.go:130] > # monitor_env = []
	I0408 12:10:21.537231  403745 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 12:10:21.537241  403745 command_runner.go:130] > # allowed_annotations = []
	I0408 12:10:21.537253  403745 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 12:10:21.537261  403745 command_runner.go:130] > # Where:
	I0408 12:10:21.537273  403745 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 12:10:21.537285  403745 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 12:10:21.537295  403745 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 12:10:21.537305  403745 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 12:10:21.537317  403745 command_runner.go:130] > #   in $PATH.
	I0408 12:10:21.537330  403745 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 12:10:21.537342  403745 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 12:10:21.537353  403745 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 12:10:21.537362  403745 command_runner.go:130] > #   state.
	I0408 12:10:21.537374  403745 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 12:10:21.537383  403745 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0408 12:10:21.537394  403745 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 12:10:21.537406  403745 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 12:10:21.537418  403745 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 12:10:21.537431  403745 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 12:10:21.537445  403745 command_runner.go:130] > #   The currently recognized values are:
	I0408 12:10:21.537458  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 12:10:21.537470  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 12:10:21.537486  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 12:10:21.537504  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 12:10:21.537519  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 12:10:21.537533  403745 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 12:10:21.537546  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 12:10:21.537556  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 12:10:21.537567  403745 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 12:10:21.537580  403745 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 12:10:21.537591  403745 command_runner.go:130] > #   deprecated option "conmon".
	I0408 12:10:21.537605  403745 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 12:10:21.537617  403745 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 12:10:21.537636  403745 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 12:10:21.537644  403745 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 12:10:21.537652  403745 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 12:10:21.537663  403745 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 12:10:21.537676  403745 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 12:10:21.537688  403745 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0408 12:10:21.537696  403745 command_runner.go:130] > #
	I0408 12:10:21.537707  403745 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 12:10:21.537717  403745 command_runner.go:130] > #
	I0408 12:10:21.537728  403745 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 12:10:21.537739  403745 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0408 12:10:21.537747  403745 command_runner.go:130] > #
	I0408 12:10:21.537760  403745 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 12:10:21.537773  403745 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 12:10:21.537780  403745 command_runner.go:130] > #
	I0408 12:10:21.537793  403745 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 12:10:21.537801  403745 command_runner.go:130] > # feature.
	I0408 12:10:21.537808  403745 command_runner.go:130] > #
	I0408 12:10:21.537815  403745 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0408 12:10:21.537826  403745 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 12:10:21.537840  403745 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 12:10:21.537853  403745 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 12:10:21.537865  403745 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0408 12:10:21.537873  403745 command_runner.go:130] > #
	I0408 12:10:21.537882  403745 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 12:10:21.537899  403745 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 12:10:21.537905  403745 command_runner.go:130] > #
	I0408 12:10:21.537914  403745 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0408 12:10:21.537927  403745 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 12:10:21.537935  403745 command_runner.go:130] > #
	I0408 12:10:21.537948  403745 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 12:10:21.537960  403745 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 12:10:21.537969  403745 command_runner.go:130] > # limitation.
	I0408 12:10:21.537980  403745 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 12:10:21.537988  403745 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 12:10:21.537996  403745 command_runner.go:130] > runtime_type = "oci"
	I0408 12:10:21.538002  403745 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 12:10:21.538011  403745 command_runner.go:130] > runtime_config_path = ""
	I0408 12:10:21.538023  403745 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 12:10:21.538031  403745 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 12:10:21.538041  403745 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 12:10:21.538050  403745 command_runner.go:130] > monitor_env = [
	I0408 12:10:21.538062  403745 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 12:10:21.538069  403745 command_runner.go:130] > ]
	I0408 12:10:21.538074  403745 command_runner.go:130] > privileged_without_host_devices = false
	I0408 12:10:21.538082  403745 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 12:10:21.538091  403745 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 12:10:21.538101  403745 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 12:10:21.538117  403745 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0408 12:10:21.538136  403745 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 12:10:21.538149  403745 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 12:10:21.538165  403745 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 12:10:21.538178  403745 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 12:10:21.538186  403745 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 12:10:21.538193  403745 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 12:10:21.538199  403745 command_runner.go:130] > # Example:
	I0408 12:10:21.538204  403745 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 12:10:21.538211  403745 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 12:10:21.538216  403745 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 12:10:21.538227  403745 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 12:10:21.538233  403745 command_runner.go:130] > # cpuset = 0
	I0408 12:10:21.538242  403745 command_runner.go:130] > # cpushares = "0-1"
	I0408 12:10:21.538251  403745 command_runner.go:130] > # Where:
	I0408 12:10:21.538262  403745 command_runner.go:130] > # The workload name is workload-type.
	I0408 12:10:21.538276  403745 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 12:10:21.538287  403745 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 12:10:21.538299  403745 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 12:10:21.538314  403745 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 12:10:21.538325  403745 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0408 12:10:21.538333  403745 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 12:10:21.538339  403745 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 12:10:21.538345  403745 command_runner.go:130] > # Default value is set to true
	I0408 12:10:21.538350  403745 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 12:10:21.538357  403745 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 12:10:21.538364  403745 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 12:10:21.538368  403745 command_runner.go:130] > # Default value is set to 'false'
	I0408 12:10:21.538375  403745 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 12:10:21.538382  403745 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 12:10:21.538384  403745 command_runner.go:130] > #
	I0408 12:10:21.538390  403745 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 12:10:21.538395  403745 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 12:10:21.538401  403745 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 12:10:21.538406  403745 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 12:10:21.538414  403745 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 12:10:21.538418  403745 command_runner.go:130] > [crio.image]
	I0408 12:10:21.538423  403745 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 12:10:21.538428  403745 command_runner.go:130] > # default_transport = "docker://"
	I0408 12:10:21.538433  403745 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 12:10:21.538447  403745 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 12:10:21.538451  403745 command_runner.go:130] > # global_auth_file = ""
	I0408 12:10:21.538455  403745 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 12:10:21.538460  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.538464  403745 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0408 12:10:21.538470  403745 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 12:10:21.538478  403745 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 12:10:21.538485  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.538496  403745 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 12:10:21.538512  403745 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 12:10:21.538520  403745 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 12:10:21.538528  403745 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 12:10:21.538534  403745 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 12:10:21.538541  403745 command_runner.go:130] > # pause_command = "/pause"
	I0408 12:10:21.538547  403745 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 12:10:21.538554  403745 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 12:10:21.538561  403745 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 12:10:21.538571  403745 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 12:10:21.538576  403745 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 12:10:21.538584  403745 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 12:10:21.538591  403745 command_runner.go:130] > # pinned_images = [
	I0408 12:10:21.538594  403745 command_runner.go:130] > # ]
	I0408 12:10:21.538602  403745 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 12:10:21.538611  403745 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 12:10:21.538617  403745 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 12:10:21.538624  403745 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 12:10:21.538631  403745 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 12:10:21.538635  403745 command_runner.go:130] > # signature_policy = ""
	I0408 12:10:21.538642  403745 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 12:10:21.538651  403745 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 12:10:21.538659  403745 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 12:10:21.538670  403745 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 12:10:21.538677  403745 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 12:10:21.538684  403745 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0408 12:10:21.538690  403745 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 12:10:21.538698  403745 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 12:10:21.538705  403745 command_runner.go:130] > # changing them here.
	I0408 12:10:21.538709  403745 command_runner.go:130] > # insecure_registries = [
	I0408 12:10:21.538715  403745 command_runner.go:130] > # ]
	I0408 12:10:21.538721  403745 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 12:10:21.538728  403745 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 12:10:21.538732  403745 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 12:10:21.538737  403745 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 12:10:21.538743  403745 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 12:10:21.538748  403745 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 12:10:21.538764  403745 command_runner.go:130] > # CNI plugins.
	I0408 12:10:21.538770  403745 command_runner.go:130] > [crio.network]
	I0408 12:10:21.538778  403745 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 12:10:21.538787  403745 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 12:10:21.538793  403745 command_runner.go:130] > # cni_default_network = ""
	I0408 12:10:21.538798  403745 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 12:10:21.538805  403745 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 12:10:21.538810  403745 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 12:10:21.538816  403745 command_runner.go:130] > # plugin_dirs = [
	I0408 12:10:21.538820  403745 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 12:10:21.538825  403745 command_runner.go:130] > # ]
	I0408 12:10:21.538834  403745 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 12:10:21.538840  403745 command_runner.go:130] > [crio.metrics]
	I0408 12:10:21.538844  403745 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 12:10:21.538850  403745 command_runner.go:130] > enable_metrics = true
	I0408 12:10:21.538855  403745 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 12:10:21.538862  403745 command_runner.go:130] > # Per default all metrics are enabled.
	I0408 12:10:21.538874  403745 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0408 12:10:21.538887  403745 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 12:10:21.538896  403745 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 12:10:21.538903  403745 command_runner.go:130] > # metrics_collectors = [
	I0408 12:10:21.538906  403745 command_runner.go:130] > # 	"operations",
	I0408 12:10:21.538912  403745 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 12:10:21.538917  403745 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 12:10:21.538923  403745 command_runner.go:130] > # 	"operations_errors",
	I0408 12:10:21.538928  403745 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 12:10:21.538934  403745 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 12:10:21.538938  403745 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 12:10:21.538947  403745 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 12:10:21.538954  403745 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 12:10:21.538958  403745 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 12:10:21.538964  403745 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 12:10:21.538969  403745 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 12:10:21.538975  403745 command_runner.go:130] > # 	"containers_oom_total",
	I0408 12:10:21.538979  403745 command_runner.go:130] > # 	"containers_oom",
	I0408 12:10:21.538985  403745 command_runner.go:130] > # 	"processes_defunct",
	I0408 12:10:21.538995  403745 command_runner.go:130] > # 	"operations_total",
	I0408 12:10:21.539002  403745 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 12:10:21.539006  403745 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 12:10:21.539012  403745 command_runner.go:130] > # 	"operations_errors_total",
	I0408 12:10:21.539016  403745 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 12:10:21.539023  403745 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 12:10:21.539028  403745 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 12:10:21.539034  403745 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 12:10:21.539038  403745 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 12:10:21.539044  403745 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 12:10:21.539049  403745 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 12:10:21.539055  403745 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 12:10:21.539058  403745 command_runner.go:130] > # ]
	I0408 12:10:21.539063  403745 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 12:10:21.539069  403745 command_runner.go:130] > # metrics_port = 9090
	I0408 12:10:21.539074  403745 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 12:10:21.539080  403745 command_runner.go:130] > # metrics_socket = ""
	I0408 12:10:21.539085  403745 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 12:10:21.539092  403745 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 12:10:21.539100  403745 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 12:10:21.539107  403745 command_runner.go:130] > # certificate on any modification event.
	I0408 12:10:21.539110  403745 command_runner.go:130] > # metrics_cert = ""
	I0408 12:10:21.539115  403745 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 12:10:21.539122  403745 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 12:10:21.539126  403745 command_runner.go:130] > # metrics_key = ""
	I0408 12:10:21.539134  403745 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 12:10:21.539137  403745 command_runner.go:130] > [crio.tracing]
	I0408 12:10:21.539145  403745 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 12:10:21.539149  403745 command_runner.go:130] > # enable_tracing = false
	I0408 12:10:21.539157  403745 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0408 12:10:21.539164  403745 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 12:10:21.539171  403745 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 12:10:21.539178  403745 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0408 12:10:21.539182  403745 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 12:10:21.539185  403745 command_runner.go:130] > [crio.nri]
	I0408 12:10:21.539190  403745 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 12:10:21.539200  403745 command_runner.go:130] > # enable_nri = false
	I0408 12:10:21.539209  403745 command_runner.go:130] > # NRI socket to listen on.
	I0408 12:10:21.539214  403745 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 12:10:21.539218  403745 command_runner.go:130] > # NRI plugin directory to use.
	I0408 12:10:21.539225  403745 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 12:10:21.539230  403745 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 12:10:21.539236  403745 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 12:10:21.539242  403745 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 12:10:21.539249  403745 command_runner.go:130] > # nri_disable_connections = false
	I0408 12:10:21.539253  403745 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 12:10:21.539260  403745 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 12:10:21.539265  403745 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 12:10:21.539271  403745 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 12:10:21.539277  403745 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 12:10:21.539283  403745 command_runner.go:130] > [crio.stats]
	I0408 12:10:21.539288  403745 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 12:10:21.539293  403745 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 12:10:21.539297  403745 command_runner.go:130] > # stats_collection_period = 0
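	The commented entries above are CRI-O's built-in defaults for the [crio.metrics], [crio.tracing], [crio.nri] and [crio.stats] sections; none of them are enabled in this run. Illustrative aside, not part of the captured output: if enable_metrics were set to true under [crio.metrics], the exporter would listen on the metrics_port shown (9090 by default) and could be scraped directly on the node, for example:
	# Illustrative only -- metrics are not enabled in this test's CRI-O config.
	curl -s http://127.0.0.1:9090/metrics | grep -i crio | head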
	I0408 12:10:21.539480  403745 cni.go:84] Creating CNI manager for ""
	I0408 12:10:21.539500  403745 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0408 12:10:21.539510  403745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:10:21.539536  403745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-830937 NodeName:multinode-830937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:10:21.539705  403745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-830937"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
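	The block above is the kubeadm/kubelet/kube-proxy configuration that minikube writes to /var/tmp/minikube/kubeadm.yaml.new before bootstrapping the node. Illustrative aside, not part of the captured output: a generated file like this can be schema-checked on the node before use, assuming the "kubeadm config validate" subcommand is available in the v1.29.x binaries the test finds below:
	# Hypothetical sanity check of the generated config (the test does not run this).
	sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new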
	I0408 12:10:21.539800  403745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:10:21.551615  403745 command_runner.go:130] > kubeadm
	I0408 12:10:21.551640  403745 command_runner.go:130] > kubectl
	I0408 12:10:21.551645  403745 command_runner.go:130] > kubelet
	I0408 12:10:21.551670  403745 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:10:21.551747  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:10:21.562567  403745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0408 12:10:21.580580  403745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:10:21.598456  403745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0408 12:10:21.617157  403745 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0408 12:10:21.621378  403745 command_runner.go:130] > 192.168.39.209	control-plane.minikube.internal
	I0408 12:10:21.621517  403745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:10:21.771888  403745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:10:21.788375  403745 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937 for IP: 192.168.39.209
	I0408 12:10:21.788412  403745 certs.go:194] generating shared ca certs ...
	I0408 12:10:21.788440  403745 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:10:21.788649  403745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:10:21.788703  403745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:10:21.788718  403745 certs.go:256] generating profile certs ...
	I0408 12:10:21.788881  403745 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/client.key
	I0408 12:10:21.788953  403745 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.key.e1ccdead
	I0408 12:10:21.788991  403745 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.key
	I0408 12:10:21.789013  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 12:10:21.789030  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 12:10:21.789049  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 12:10:21.789065  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 12:10:21.789083  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 12:10:21.789100  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 12:10:21.789120  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 12:10:21.789137  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 12:10:21.789254  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:10:21.789288  403745 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:10:21.789298  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:10:21.789319  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:10:21.789346  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:10:21.789374  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:10:21.789425  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:10:21.789499  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 12:10:21.789523  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 12:10:21.789536  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:21.790221  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:10:21.817450  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:10:21.844127  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:10:21.870601  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:10:21.896997  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:10:21.924343  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:10:21.951881  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:10:21.979743  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:10:22.006919  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:10:22.034161  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:10:22.060195  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:10:22.086227  403745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:10:22.104744  403745 ssh_runner.go:195] Run: openssl version
	I0408 12:10:22.112219  403745 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 12:10:22.112455  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:10:22.125306  403745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.130108  403745 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.130234  403745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.130295  403745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.136316  403745 command_runner.go:130] > b5213941
	I0408 12:10:22.136399  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:10:22.147046  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:10:22.159512  403745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.164584  403745 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.164625  403745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.164695  403745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.170757  403745 command_runner.go:130] > 51391683
	I0408 12:10:22.170908  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:10:22.182193  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:10:22.194932  403745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.200266  403745 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.200349  403745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.200428  403745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.206662  403745 command_runner.go:130] > 3ec20f2e
	I0408 12:10:22.206780  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
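	The three symlink steps above use OpenSSL's subject-hash convention: "openssl x509 -hash -noout -in <cert>" prints the hash under which libssl looks the certificate up in /etc/ssl/certs, and the "<hash>.0" symlink is what makes the CA trusted system-wide. A minimal sketch of the same technique with illustrative file names (not taken from this test):
	# Install a CA certificate so OpenSSL clients trust it (illustrative path).
	CERT=/usr/share/ca-certificates/example-ca.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	# The hash directory now resolves the CA:
	openssl verify -CApath /etc/ssl/certs "$CERT"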
	I0408 12:10:22.219241  403745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:10:22.224078  403745 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:10:22.224108  403745 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 12:10:22.224116  403745 command_runner.go:130] > Device: 253,1	Inode: 5245446     Links: 1
	I0408 12:10:22.224125  403745 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 12:10:22.224137  403745 command_runner.go:130] > Access: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224144  403745 command_runner.go:130] > Modify: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224149  403745 command_runner.go:130] > Change: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224157  403745 command_runner.go:130] >  Birth: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224246  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:10:22.230517  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.230714  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:10:22.236838  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.236912  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:10:22.242789  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.243057  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:10:22.248881  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.249070  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:10:22.254912  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.255004  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:10:22.261127  403745 command_runner.go:130] > Certificate will not expire
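	Each of the openssl runs above uses -checkend 86400, which asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0, with the "Certificate will not expire" message seen here, means it remains valid for at least that long; exit status 1 means it does not. A standalone equivalent with one of the certificate paths from this run:
	# Exit 0 if the certificate is still valid 24 hours from now, 1 otherwise.
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "still valid for 24h" || echo "expires within 24h"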
	I0408 12:10:22.261250  403745 kubeadm.go:391] StartCluster: {Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:10:22.261405  403745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:10:22.261486  403745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:10:22.299598  403745 command_runner.go:130] > e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76
	I0408 12:10:22.299649  403745 command_runner.go:130] > 5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a
	I0408 12:10:22.299659  403745 command_runner.go:130] > 1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6
	I0408 12:10:22.299667  403745 command_runner.go:130] > da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7
	I0408 12:10:22.299676  403745 command_runner.go:130] > 284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b
	I0408 12:10:22.299694  403745 command_runner.go:130] > 7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b
	I0408 12:10:22.299704  403745 command_runner.go:130] > 7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb
	I0408 12:10:22.299727  403745 command_runner.go:130] > 7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031
	I0408 12:10:22.301159  403745 cri.go:89] found id: "e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76"
	I0408 12:10:22.301183  403745 cri.go:89] found id: "5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a"
	I0408 12:10:22.301189  403745 cri.go:89] found id: "1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6"
	I0408 12:10:22.301194  403745 cri.go:89] found id: "da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7"
	I0408 12:10:22.301198  403745 cri.go:89] found id: "284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b"
	I0408 12:10:22.301207  403745 cri.go:89] found id: "7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b"
	I0408 12:10:22.301211  403745 cri.go:89] found id: "7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb"
	I0408 12:10:22.301214  403745 cri.go:89] found id: "7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031"
	I0408 12:10:22.301218  403745 cri.go:89] found id: ""
	I0408 12:10:22.301282  403745 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.476353424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578308476329426,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0afe5f5-1745-4567-86db-57cd31ba5292 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.477168791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d19d9d1-eb9e-48d5-a9b7-40919efb60dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.477254415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d19d9d1-eb9e-48d5-a9b7-40919efb60dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.477705154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d19d9d1-eb9e-48d5-a9b7-40919efb60dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.526551077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b256a414-fee1-4e4c-95e7-911a5ab8b667 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.526681451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b256a414-fee1-4e4c-95e7-911a5ab8b667 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.527716145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3794b7f7-645b-48e5-9182-29700ef51226 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.528141974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578308528100939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3794b7f7-645b-48e5-9182-29700ef51226 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.528792831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0654ce24-6b77-4030-92d5-fd4a14cf8094 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.528869536Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0654ce24-6b77-4030-92d5-fd4a14cf8094 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.529241440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0654ce24-6b77-4030-92d5-fd4a14cf8094 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.578851010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea4c35f7-4e42-4c01-82a5-8e1015658fbd name=/runtime.v1.RuntimeService/Version
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.578951626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea4c35f7-4e42-4c01-82a5-8e1015658fbd name=/runtime.v1.RuntimeService/Version
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.580531949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99142dd6-ef21-440f-972e-e04112a25a62 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.580979445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578308580951388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99142dd6-ef21-440f-972e-e04112a25a62 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.581650758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83ea0035-fa46-40f2-9d5f-4d4bccda12c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.581731756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83ea0035-fa46-40f2-9d5f-4d4bccda12c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.582986961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83ea0035-fa46-40f2-9d5f-4d4bccda12c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.631870710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a91527f-3354-435e-8b39-c791c348ddd0 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.632207620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a91527f-3354-435e-8b39-c791c348ddd0 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.633407888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33026919-6c0a-4b7a-bc0f-cee20c57045c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.634163615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578308634123596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33026919-6c0a-4b7a-bc0f-cee20c57045c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.634684345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a3c4708-0f31-4d9c-9695-7df3ffc60e48 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.634740999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a3c4708-0f31-4d9c-9695-7df3ffc60e48 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:11:48 multinode-830937 crio[2834]: time="2024-04-08 12:11:48.635087554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a3c4708-0f31-4d9c-9695-7df3ffc60e48 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ba346d53555d5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   67bea1ad1e2ce       busybox-7fdf7869d9-jn6pk
	ce62d426a1abb       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   cb6242edc417a       kindnet-pshn8
	6b3f97a10dbec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   98833232fc637       coredns-76f75df574-5fk5c
	83f9e36fb1498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a624408700c2b       storage-provisioner
	82bbbff4ed64b       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   2767413e7d299       kube-proxy-qm6vx
	992f433a96797       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   dc93300983e1d       etcd-multinode-830937
	9c67d2db5cd71       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   175d44acb00d5       kube-controller-manager-multinode-830937
	5ac245d28d5e6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   c2410c8bdb79b       kube-apiserver-multinode-830937
	beccf095cd84c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   58221f6882675       kube-scheduler-multinode-830937
	c0cf7f781e2db       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   b093d1301485c       busybox-7fdf7869d9-jn6pk
	e44b4f6b6a25e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   71990dafa93e1       coredns-76f75df574-5fk5c
	5bb59aff7adac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   4d43c58410aee       storage-provisioner
	1e04ca573f33a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   5a343ad7d660c       kindnet-pshn8
	da9349d66fe24       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   e48f38f25b4aa       kube-proxy-qm6vx
	284273d5afb07       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   e033ff197814a       kube-scheduler-multinode-830937
	7ed59a2a6bedc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   5bd6b29159a09       etcd-multinode-830937
	7e303f2a50cf0       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   10c0ab27d62a2       kube-apiserver-multinode-830937
	7e5f832d63815       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   f9f7d9b0fcca8       kube-controller-manager-multinode-830937
	
	
	==> coredns [6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50421 - 389 "HINFO IN 1154849171857597706.7063981779794316596. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007401745s
	
	
	==> coredns [e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76] <==
	[INFO] 10.244.0.3:36577 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001786511s
	[INFO] 10.244.0.3:40240 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052431s
	[INFO] 10.244.0.3:35015 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045612s
	[INFO] 10.244.0.3:35469 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334288s
	[INFO] 10.244.0.3:56058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081303s
	[INFO] 10.244.0.3:47690 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049309s
	[INFO] 10.244.0.3:41173 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049722s
	[INFO] 10.244.1.2:57122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306719s
	[INFO] 10.244.1.2:52769 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162088s
	[INFO] 10.244.1.2:37533 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093994s
	[INFO] 10.244.1.2:39965 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075724s
	[INFO] 10.244.0.3:54441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069189s
	[INFO] 10.244.0.3:57561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051756s
	[INFO] 10.244.0.3:42008 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047009s
	[INFO] 10.244.0.3:56901 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045897s
	[INFO] 10.244.1.2:56321 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140528s
	[INFO] 10.244.1.2:45879 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014176s
	[INFO] 10.244.1.2:35317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134061s
	[INFO] 10.244.1.2:39462 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121502s
	[INFO] 10.244.0.3:60830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080915s
	[INFO] 10.244.0.3:37696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000037257s
	[INFO] 10.244.0.3:42774 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116028s
	[INFO] 10.244.0.3:57023 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000047469s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
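	
	For reference, a CoreDNS log excerpt like the one above can be pulled from a live cluster with plain kubectl; a minimal sketch, assuming the kubectl context matches the minikube profile name and the default kubeadm pod label k8s-app=kube-dns:
	
	  # logs of the currently running CoreDNS containers
	  kubectl --context multinode-830937 -n kube-system logs -l k8s-app=kube-dns --tail=100
	  # logs of an exited (previous) container, e.g. the SIGTERM shutdown shown above
	  kubectl --context multinode-830937 -n kube-system logs <coredns-pod-name> --previous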
	
	
	==> describe nodes <==
	Name:               multinode-830937
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-830937
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=multinode-830937
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_04_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:04:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-830937
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 12:11:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    multinode-830937
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b40595c8482648e0ac686434f4f4a9a5
	  System UUID:                b40595c8-4826-48e0-ac68-6434f4f4a9a5
	  Boot ID:                    367f8949-d58b-4d28-9f83-ad221b18208d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jn6pk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-76f75df574-5fk5c                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m21s
	  kube-system                 etcd-multinode-830937                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m34s
	  kube-system                 kindnet-pshn8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m22s
	  kube-system                 kube-apiserver-multinode-830937             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-controller-manager-multinode-830937    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-proxy-qm6vx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-multinode-830937             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m21s              kube-proxy       
	  Normal  Starting                 79s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m34s              kubelet          Node multinode-830937 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m34s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m34s              kubelet          Node multinode-830937 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s              kubelet          Node multinode-830937 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m34s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m23s              node-controller  Node multinode-830937 event: Registered Node multinode-830937 in Controller
	  Normal  NodeReady                7m20s              kubelet          Node multinode-830937 status is now: NodeReady
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node multinode-830937 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node multinode-830937 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node multinode-830937 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                node-controller  Node multinode-830937 event: Registered Node multinode-830937 in Controller
	
	
	Name:               multinode-830937-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-830937-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=multinode-830937
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T12_11_09_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:11:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-830937-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 12:11:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:11:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:11:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:11:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:11:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    multinode-830937-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f602a83f5d244d8a54a893c287d5654
	  System UUID:                6f602a83-f5d2-44d8-a54a-893c287d5654
	  Boot ID:                    39df4e58-c417-4dec-88eb-ccc5c5d887a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2pf6r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-9pdws               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m47s
	  kube-system                 kube-proxy-rhzzl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m42s                  kube-proxy  
	  Normal  Starting                 37s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m47s (x3 over 6m48s)  kubelet     Node multinode-830937-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s (x3 over 6m48s)  kubelet     Node multinode-830937-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s (x3 over 6m48s)  kubelet     Node multinode-830937-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m38s                  kubelet     Node multinode-830937-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-830937-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-830937-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-830937-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                32s                    kubelet     Node multinode-830937-m02 status is now: NodeReady
	
	
	Name:               multinode-830937-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-830937-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=multinode-830937
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T12_11_37_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:11:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-830937-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 12:11:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:11:45 +0000   Mon, 08 Apr 2024 12:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:11:45 +0000   Mon, 08 Apr 2024 12:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:11:45 +0000   Mon, 08 Apr 2024 12:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:11:45 +0000   Mon, 08 Apr 2024 12:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    multinode-830937-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e443840395f14d428beb5986e8bf5717
	  System UUID:                e4438403-95f1-4d42-8beb-5986e8bf5717
	  Boot ID:                    4c96c213-5a5b-4f66-a9a4-72f98d087799
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cd659       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m59s
	  kube-system                 kube-proxy-25r2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  Starting                 7s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m (x2 over 6m)        kubelet          Node multinode-830937-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x2 over 6m)        kubelet          Node multinode-830937-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x2 over 6m)        kubelet          Node multinode-830937-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s                  kubelet          Node multinode-830937-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m17s (x2 over 5m17s)  kubelet          Node multinode-830937-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x2 over 5m17s)  kubelet          Node multinode-830937-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m17s (x2 over 5m17s)  kubelet          Node multinode-830937-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m8s                   kubelet          Node multinode-830937-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet          Node multinode-830937-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet          Node multinode-830937-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet          Node multinode-830937-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node multinode-830937-m03 event: Registered Node multinode-830937-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-830937-m03 status is now: NodeReady
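	
	The three node descriptions above are standard kubectl describe output; to regenerate them against the running cluster (a sketch, assuming the kubectl context is named after the minikube profile), something along these lines works:
	
	  kubectl --context multinode-830937 describe nodes
	  # compact view: readiness, internal IPs, kubelet and runtime versions
	  kubectl --context multinode-830937 get nodes -o wide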
	
	
	==> dmesg <==
	[  +0.055861] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052867] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.175089] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.145009] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.282261] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[Apr 8 12:04] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.062538] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.764834] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.632545] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.700045] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.077752] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.099639] systemd-fstab-generator[1467]: Ignoring "noauto" option for root device
	[  +0.129495] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 8 12:05] kauditd_printk_skb: 82 callbacks suppressed
	[Apr 8 12:10] systemd-fstab-generator[2752]: Ignoring "noauto" option for root device
	[  +0.144589] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.181270] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.141141] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[  +0.303184] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.768801] systemd-fstab-generator[2917]: Ignoring "noauto" option for root device
	[  +1.936846] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +5.702177] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.020455] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.799258] systemd-fstab-generator[3864]: Ignoring "noauto" option for root device
	[Apr 8 12:11] kauditd_printk_skb: 14 callbacks suppressed
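	
	The dmesg excerpt above is read from the guest VM's kernel ring buffer; assuming the multinode-830937 profile is still running, it can be re-read with something like:
	
	  minikube ssh -p multinode-830937 "dmesg | tail -n 40"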
	
	
	==> etcd [7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b] <==
	{"level":"info","ts":"2024-04-08T12:04:09.370779Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:04:09.370818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:04:09.372529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T12:04:09.376091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:04:09.376202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:04:09.381679Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:04:09.390696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-04-08T12:05:01.025263Z","caller":"traceutil/trace.go:171","msg":"trace[406954701] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"245.840194ms","start":"2024-04-08T12:05:00.779389Z","end":"2024-04-08T12:05:01.025229Z","steps":["trace[406954701] 'process raft request'  (duration: 239.933846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:05:50.179674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.799499ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6276767450623414734 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-830937-m03.17c44c9262309686\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-830937-m03.17c44c9262309686\" value_size:642 lease:6276767450623414531 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-08T12:05:50.179978Z","caller":"traceutil/trace.go:171","msg":"trace[1481561105] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"194.282408ms","start":"2024-04-08T12:05:49.985664Z","end":"2024-04-08T12:05:50.179947Z","steps":["trace[1481561105] 'process raft request'  (duration: 194.221264ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:05:50.180083Z","caller":"traceutil/trace.go:171","msg":"trace[1307218404] linearizableReadLoop","detail":"{readStateIndex:592; appliedIndex:591; }","duration":"242.423286ms","start":"2024-04-08T12:05:49.937645Z","end":"2024-04-08T12:05:50.180068Z","steps":["trace[1307218404] 'read index received'  (duration: 79.516699ms)","trace[1307218404] 'applied index is now lower than readState.Index'  (duration: 162.905014ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T12:05:50.180196Z","caller":"traceutil/trace.go:171","msg":"trace[618306446] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"245.257206ms","start":"2024-04-08T12:05:49.934931Z","end":"2024-04-08T12:05:50.180188Z","steps":["trace[618306446] 'process raft request'  (duration: 82.2215ms)","trace[618306446] 'compare'  (duration: 161.691585ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T12:05:50.180546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.895344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-08T12:05:50.182298Z","caller":"traceutil/trace.go:171","msg":"trace[1089261321] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:559; }","duration":"244.710557ms","start":"2024-04-08T12:05:49.937567Z","end":"2024-04-08T12:05:50.182277Z","steps":["trace[1089261321] 'agreement among raft nodes before linearized reading'  (duration: 242.957103ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:05:54.161383Z","caller":"traceutil/trace.go:171","msg":"trace[1465123816] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"188.898528ms","start":"2024-04-08T12:05:53.972415Z","end":"2024-04-08T12:05:54.161313Z","steps":["trace[1465123816] 'process raft request'  (duration: 188.733343ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:08:48.794431Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-08T12:08:48.794564Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-830937","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-04-08T12:08:48.794705Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T12:08:48.794812Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T12:08:48.883902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T12:08:48.884036Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-08T12:08:48.884093Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-04-08T12:08:48.887562Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:08:48.887803Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:08:48.887816Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-830937","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> etcd [992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f] <==
	{"level":"info","ts":"2024-04-08T12:10:25.417093Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:10:25.417105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:10:25.417353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-04-08T12:10:25.417438Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-04-08T12:10:25.417555Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:10:25.417647Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:10:25.435108Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T12:10:25.436123Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T12:10:25.436366Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T12:10:25.440715Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:10:25.440759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:10:27.24721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-08T12:10:27.247253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:10:27.24729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-04-08T12:10:27.247303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.247309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.247317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.247327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.256289Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:multinode-830937 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:10:27.25629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:10:27.256466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:10:27.256482Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:10:27.256899Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:10:27.25842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-04-08T12:10:27.258679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:11:49 up 8 min,  0 users,  load average: 0.54, 0.23, 0.12
	Linux multinode-830937 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6] <==
	I0408 12:08:08.730272       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:18.743541       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:18.743708       1 main.go:227] handling current node
	I0408 12:08:18.743735       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:18.743755       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:18.744020       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:18.744103       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:28.757556       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:28.757945       1 main.go:227] handling current node
	I0408 12:08:28.758036       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:28.758065       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:28.758193       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:28.758214       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:38.764043       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:38.764090       1 main.go:227] handling current node
	I0408 12:08:38.764101       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:38.764107       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:38.764210       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:38.764215       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:48.779223       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:48.779305       1 main.go:227] handling current node
	I0408 12:08:48.779316       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:48.779323       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:48.779654       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:48.779684       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3] <==
	I0408 12:11:00.528251       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:11:10.535186       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:11:10.535231       1 main.go:227] handling current node
	I0408 12:11:10.535243       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:11:10.535249       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:11:10.535457       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:11:10.535493       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:11:20.541406       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:11:20.541545       1 main.go:227] handling current node
	I0408 12:11:20.541635       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:11:20.541647       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:11:20.541832       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:11:20.541872       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:11:30.599669       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:11:30.599718       1 main.go:227] handling current node
	I0408 12:11:30.599733       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:11:30.599741       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:11:30.599917       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:11:30.599957       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:11:40.615012       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:11:40.615092       1 main.go:227] handling current node
	I0408 12:11:40.615117       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:11:40.615124       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:11:40.615282       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:11:40.615316       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.2.0/24] 
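	
	Note that the two kindnet excerpts disagree on the range for multinode-830937-m03: 10.244.3.0/24 before the restart and 10.244.2.0/24 once the node was re-registered (the controller-manager log further down records the new "Set node PodCIDR" assignment at 12:11:36). The currently assigned ranges can be confirmed with a one-liner such as:
	
	  kubectl --context multinode-830937 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR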
	
	
	==> kube-apiserver [5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db] <==
	I0408 12:10:28.621103       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0408 12:10:28.621113       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0408 12:10:28.621148       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0408 12:10:28.673125       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 12:10:28.679835       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0408 12:10:28.680396       1 shared_informer.go:318] Caches are synced for configmaps
	I0408 12:10:28.680881       1 aggregator.go:165] initial CRD sync complete...
	I0408 12:10:28.680917       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 12:10:28.680940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 12:10:28.680963       1 cache.go:39] Caches are synced for autoregister controller
	I0408 12:10:28.698340       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0408 12:10:28.698412       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0408 12:10:28.698424       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0408 12:10:28.740199       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 12:10:28.760900       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 12:10:28.769161       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0408 12:10:28.787196       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0408 12:10:29.598812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 12:10:30.926219       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0408 12:10:31.054361       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 12:10:31.071954       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 12:10:31.145477       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 12:10:31.153103       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 12:10:41.007992       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 12:10:41.303221       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb] <==
	I0408 12:04:10.998490       1 cache.go:39] Caches are synced for autoregister controller
	I0408 12:04:11.781219       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 12:04:11.790190       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 12:04:11.790262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 12:04:12.625360       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 12:04:12.676522       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 12:04:12.798093       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0408 12:04:12.805217       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.209]
	I0408 12:04:12.806304       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 12:04:12.813685       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 12:04:12.839458       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 12:04:14.255217       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 12:04:14.270653       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 12:04:14.291968       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0408 12:04:26.299550       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0408 12:04:26.406012       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0408 12:08:48.792243       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0408 12:08:48.805999       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0408 12:08:48.807559       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.810957       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811040       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811139       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811160       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811320       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.817883       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031] <==
	I0408 12:05:16.936862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.840326ms"
	I0408 12:05:16.936966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.353µs"
	I0408 12:05:50.181677       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m03\" does not exist"
	I0408 12:05:50.183953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:05:50.198464       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m03" podCIDRs=["10.244.2.0/24"]
	I0408 12:05:50.233120       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-25r2l"
	I0408 12:05:50.235468       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cd659"
	I0408 12:05:50.891335       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-830937-m03"
	I0408 12:05:50.891461       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-830937-m03 event: Registered Node multinode-830937-m03 in Controller"
	I0408 12:06:00.684986       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:06:31.787335       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:06:32.912254       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m03\" does not exist"
	I0408 12:06:32.913953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:06:32.940724       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m03" podCIDRs=["10.244.3.0/24"]
	I0408 12:06:42.018840       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:07:25.948386       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:07:25.950299       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-830937-m03 status is now: NodeNotReady"
	I0408 12:07:25.963277       1 event.go:376] "Event occurred" object="multinode-830937-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-830937-m02 status is now: NodeNotReady"
	I0408 12:07:25.977126       1 event.go:376] "Event occurred" object="kube-system/kindnet-cd659" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:25.988673       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-rhzzl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:25.993398       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-25r2l" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:26.008109       1 event.go:376] "Event occurred" object="kube-system/kindnet-9pdws" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:26.020124       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-522p8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:26.028847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.424434ms"
	I0408 12:07:26.029715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="53.224µs"
	
	
	==> kube-controller-manager [9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7] <==
	I0408 12:11:03.656193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="38.471µs"
	I0408 12:11:03.669798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.06827ms"
	I0408 12:11:03.669898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.882µs"
	I0408 12:11:07.008459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.9µs"
	I0408 12:11:07.980209       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m02\" does not exist"
	I0408 12:11:07.981298       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-522p8" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-522p8"
	I0408 12:11:07.998431       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m02" podCIDRs=["10.244.1.0/24"]
	I0408 12:11:08.894973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.014µs"
	I0408 12:11:08.905179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="69.596µs"
	I0408 12:11:08.931944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.37µs"
	I0408 12:11:08.944238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="109.987µs"
	I0408 12:11:08.949070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.329µs"
	I0408 12:11:16.085832       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:16.108278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="84.385µs"
	I0408 12:11:16.126157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="80.644µs"
	I0408 12:11:19.084066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="10.155919ms"
	I0408 12:11:19.085143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="136.688µs"
	I0408 12:11:21.014763       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2pf6r" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-2pf6r"
	I0408 12:11:35.170554       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:36.018889       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-830937-m03 event: Removing Node multinode-830937-m03 from Controller"
	I0408 12:11:36.409667       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:36.410207       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m03\" does not exist"
	I0408 12:11:36.435050       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m03" podCIDRs=["10.244.2.0/24"]
	I0408 12:11:41.019675       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-830937-m03 event: Registered Node multinode-830937-m03 in Controller"
	I0408 12:11:45.352936       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	
	
	==> kube-proxy [82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783] <==
	I0408 12:10:29.757144       1 server_others.go:72] "Using iptables proxy"
	I0408 12:10:29.797353       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0408 12:10:29.924888       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:10:29.924939       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:10:29.924959       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:10:29.928862       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:10:29.929230       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:10:29.929431       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:10:29.931564       1 config.go:188] "Starting service config controller"
	I0408 12:10:29.931709       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:10:29.931761       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:10:29.931786       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:10:29.931978       1 config.go:315] "Starting node config controller"
	I0408 12:10:29.932016       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:10:30.032696       1 shared_informer.go:318] Caches are synced for node config
	I0408 12:10:30.032723       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:10:30.032752       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7] <==
	I0408 12:04:27.657061       1 server_others.go:72] "Using iptables proxy"
	I0408 12:04:27.670018       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0408 12:04:27.717137       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:04:27.717180       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:04:27.717255       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:04:27.720403       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:04:27.720860       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:04:27.720881       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:04:27.721838       1 config.go:188] "Starting service config controller"
	I0408 12:04:27.721885       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:04:27.721910       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:04:27.721914       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:04:27.722451       1 config.go:315] "Starting node config controller"
	I0408 12:04:27.722458       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:04:27.822986       1 shared_informer.go:318] Caches are synced for node config
	I0408 12:04:27.823041       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:04:27.823087       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b] <==
	W0408 12:04:10.959517       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 12:04:10.959735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 12:04:11.784574       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 12:04:11.784677       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:04:11.891130       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 12:04:11.891268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 12:04:11.953668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 12:04:11.953735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 12:04:11.954265       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 12:04:11.954337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 12:04:12.093001       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 12:04:12.093031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 12:04:12.120996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 12:04:12.121043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 12:04:12.149100       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 12:04:12.149173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 12:04:12.202833       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 12:04:12.202894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 12:04:12.264464       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 12:04:12.264496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 12:04:12.270180       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 12:04:12.270266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 12:04:12.372126       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 12:04:12.372270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0408 12:04:14.934437       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8] <==
	I0408 12:10:25.952262       1 serving.go:380] Generated self-signed cert in-memory
	W0408 12:10:28.633340       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 12:10:28.635680       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:10:28.635809       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 12:10:28.635840       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 12:10:28.672666       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0408 12:10:28.672899       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:10:28.680783       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 12:10:28.680830       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 12:10:28.683522       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0408 12:10:28.683641       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0408 12:10:28.781240       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.847488    3049 topology_manager.go:215] "Topology Admit Handler" podUID="6a7258d8-d40d-4304-88ff-dfd2acc388e2" podNamespace="kube-system" podName="coredns-76f75df574-5fk5c"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.847625    3049 topology_manager.go:215] "Topology Admit Handler" podUID="a66019d1-fd63-4dd8-8954-c279352fbd0b" podNamespace="kube-system" podName="storage-provisioner"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.847748    3049 topology_manager.go:215] "Topology Admit Handler" podUID="384a9c78-7509-45b9-9491-3cff7c3ee650" podNamespace="default" podName="busybox-7fdf7869d9-jn6pk"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.856992    3049 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.933336    3049 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a66019d1-fd63-4dd8-8954-c279352fbd0b-tmp\") pod \"storage-provisioner\" (UID: \"a66019d1-fd63-4dd8-8954-c279352fbd0b\") " pod="kube-system/storage-provisioner"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.933722    3049 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/feeec3e8-e596-4675-9a1b-dd394759e88e-lib-modules\") pod \"kube-proxy-qm6vx\" (UID: \"feeec3e8-e596-4675-9a1b-dd394759e88e\") " pod="kube-system/kube-proxy-qm6vx"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.934712    3049 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe500d50-29e0-48c7-8a7d-c1d7885d7293-lib-modules\") pod \"kindnet-pshn8\" (UID: \"fe500d50-29e0-48c7-8a7d-c1d7885d7293\") " pod="kube-system/kindnet-pshn8"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.935010    3049 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/feeec3e8-e596-4675-9a1b-dd394759e88e-xtables-lock\") pod \"kube-proxy-qm6vx\" (UID: \"feeec3e8-e596-4675-9a1b-dd394759e88e\") " pod="kube-system/kube-proxy-qm6vx"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.935443    3049 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe500d50-29e0-48c7-8a7d-c1d7885d7293-xtables-lock\") pod \"kindnet-pshn8\" (UID: \"fe500d50-29e0-48c7-8a7d-c1d7885d7293\") " pod="kube-system/kindnet-pshn8"
	Apr 08 12:10:28 multinode-830937 kubelet[3049]: I0408 12:10:28.936508    3049 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fe500d50-29e0-48c7-8a7d-c1d7885d7293-cni-cfg\") pod \"kindnet-pshn8\" (UID: \"fe500d50-29e0-48c7-8a7d-c1d7885d7293\") " pod="kube-system/kindnet-pshn8"
	Apr 08 12:10:37 multinode-830937 kubelet[3049]: I0408 12:10:37.444153    3049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.917927    3049 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 12:11:23 multinode-830937 kubelet[3049]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 12:11:23 multinode-830937 kubelet[3049]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 12:11:23 multinode-830937 kubelet[3049]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 12:11:23 multinode-830937 kubelet[3049]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.965123    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podfeeec3e8-e596-4675-9a1b-dd394759e88e/crio-e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364: Error finding container e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364: Status 404 returned error can't find the container with id e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.965681    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod384a9c78-7509-45b9-9491-3cff7c3ee650/crio-b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2: Error finding container b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2: Status 404 returned error can't find the container with id b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.966147    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6a7258d8-d40d-4304-88ff-dfd2acc388e2/crio-71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55: Error finding container 71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55: Status 404 returned error can't find the container with id 71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.966457    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6b801b753040f40fcf4d08dd3bf64142/crio-5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a: Error finding container 5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a: Status 404 returned error can't find the container with id 5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.966796    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda5b79de58a1de000fdb766c8c2ded58a/crio-e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab: Error finding container e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab: Status 404 returned error can't find the container with id e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.967210    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod98343e58f0d1b18f1fef2476b3eb21d6/crio-10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965: Error finding container 10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965: Status 404 returned error can't find the container with id 10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.967439    3049 manager.go:1116] Failed to create existing container: /kubepods/podfe500d50-29e0-48c7-8a7d-c1d7885d7293/crio-5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003: Error finding container 5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003: Status 404 returned error can't find the container with id 5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.967904    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda66019d1-fd63-4dd8-8954-c279352fbd0b/crio-4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c: Error finding container 4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c: Status 404 returned error can't find the container with id 4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c
	Apr 08 12:11:23 multinode-830937 kubelet[3049]: E0408 12:11:23.968292    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb1c18b37da34361164ff4a42a164cf28/crio-f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578: Error finding container f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578: Status 404 returned error can't find the container with id f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578
	

-- /stdout --
** stderr ** 
	E0408 12:11:48.131425  404787 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18588-368424/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-830937 -n multinode-830937
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-830937 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (305.29s)
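Note on the "bufio.Scanner: token too long" line in the stderr above: Go's bufio.Scanner refuses to return a line longer than its buffer (64 KiB by default), so a single oversized line in lastStart.txt is enough to abort the "Last Start" dump. A quick sketch for confirming that locally, assuming a POSIX shell with awk (the path is copied from the error message, not invented):

	# Print the longest line length in lastStart.txt; anything over 65536 bytes
	# would trip the default scanner buffer used when reading the file.
	awk '{ if (length > max) max = length } END { print "longest line:", max, "bytes" }' \
	  /home/jenkins/minikube-integration/18588-368424/.minikube/logs/lastStart.txt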

TestMultiNode/serial/StopMultiNode (141.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 stop
E0408 12:13:06.832777  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:13:44.543895  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-830937 stop: exit status 82 (2m0.514504338s)

-- stdout --
	* Stopping node "multinode-830937-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-830937 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-830937 status: exit status 3 (18.815716607s)

-- stdout --
	multinode-830937
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-830937-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0408 12:14:11.924125  405457 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host
	E0408 12:14:11.924170  405457 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.128:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-830937 status" : exit status 3
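The stop failure above exits with status 82 (GUEST_STOP_TIMEOUT): the kvm2 guest for multinode-830937-m02 was still reporting "Running" when minikube gave up waiting. A reproduction and cleanup sketch, assuming a local kvm2 setup and that the libvirt domain names match the node names shown in the DBG lines elsewhere in this report (both assumptions, not taken from the test itself):

	# Retry the graceful stop with verbose output.
	out/minikube-linux-amd64 -p multinode-830937 stop --alsologtostderr -v=3

	# Inspect the guest directly via libvirt; destroy is a hard power-off, last resort.
	virsh list --all
	virsh destroy multinode-830937-m02

	# Collect the log file the error message asks for when filing an issue.
	out/minikube-linux-amd64 -p multinode-830937 logs --file=logs.txt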
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-830937 -n multinode-830937
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-830937 logs -n 25: (1.606981045s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937:/home/docker/cp-test_multinode-830937-m02_multinode-830937.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937 sudo cat                                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m02_multinode-830937.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03:/home/docker/cp-test_multinode-830937-m02_multinode-830937-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937-m03 sudo cat                                   | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m02_multinode-830937-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp testdata/cp-test.txt                                                | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1863887303/001/cp-test_multinode-830937-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937:/home/docker/cp-test_multinode-830937-m03_multinode-830937.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937 sudo cat                                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m03_multinode-830937.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt                       | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m02:/home/docker/cp-test_multinode-830937-m03_multinode-830937-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n                                                                 | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | multinode-830937-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-830937 ssh -n multinode-830937-m02 sudo cat                                   | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | /home/docker/cp-test_multinode-830937-m03_multinode-830937-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-830937 node stop m03                                                          | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	| node    | multinode-830937 node start                                                             | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC | 08 Apr 24 12:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-830937                                                                | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC |                     |
	| stop    | -p multinode-830937                                                                     | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:06 UTC |                     |
	| start   | -p multinode-830937                                                                     | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:08 UTC | 08 Apr 24 12:11 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-830937                                                                | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:11 UTC |                     |
	| node    | multinode-830937 node delete                                                            | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:11 UTC | 08 Apr 24 12:11 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-830937 stop                                                                   | multinode-830937 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:11 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:08:47
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:08:47.725583  403745 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:08:47.726223  403745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:08:47.726245  403745 out.go:304] Setting ErrFile to fd 2...
	I0408 12:08:47.726253  403745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:08:47.726731  403745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:08:47.727715  403745 out.go:298] Setting JSON to false
	I0408 12:08:47.728711  403745 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6671,"bootTime":1712571457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:08:47.728785  403745 start.go:139] virtualization: kvm guest
	I0408 12:08:47.731451  403745 out.go:177] * [multinode-830937] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:08:47.733071  403745 notify.go:220] Checking for updates...
	I0408 12:08:47.733085  403745 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:08:47.734820  403745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:08:47.736276  403745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:08:47.737446  403745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:08:47.738791  403745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:08:47.740002  403745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:08:47.741607  403745 config.go:182] Loaded profile config "multinode-830937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:08:47.741730  403745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:08:47.742372  403745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:08:47.742435  403745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:08:47.757790  403745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42319
	I0408 12:08:47.758348  403745 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:08:47.759004  403745 main.go:141] libmachine: Using API Version  1
	I0408 12:08:47.759027  403745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:08:47.759419  403745 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:08:47.759670  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:08:47.796833  403745 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:08:47.798143  403745 start.go:297] selected driver: kvm2
	I0408 12:08:47.798163  403745 start.go:901] validating driver "kvm2" against &{Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:08:47.798302  403745 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:08:47.798757  403745 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:08:47.798862  403745 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:08:47.815055  403745 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:08:47.815802  403745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:08:47.815881  403745 cni.go:84] Creating CNI manager for ""
	I0408 12:08:47.815897  403745 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0408 12:08:47.815983  403745 start.go:340] cluster config:
	{Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:08:47.816118  403745 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:08:47.818011  403745 out.go:177] * Starting "multinode-830937" primary control-plane node in "multinode-830937" cluster
	I0408 12:08:47.819283  403745 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:08:47.819341  403745 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 12:08:47.819352  403745 cache.go:56] Caching tarball of preloaded images
	I0408 12:08:47.819440  403745 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:08:47.819452  403745 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 12:08:47.819581  403745 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/config.json ...
	I0408 12:08:47.819856  403745 start.go:360] acquireMachinesLock for multinode-830937: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:08:47.819920  403745 start.go:364] duration metric: took 29.457µs to acquireMachinesLock for "multinode-830937"
	I0408 12:08:47.819935  403745 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:08:47.819943  403745 fix.go:54] fixHost starting: 
	I0408 12:08:47.820194  403745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:08:47.820226  403745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:08:47.835866  403745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0408 12:08:47.836414  403745 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:08:47.836943  403745 main.go:141] libmachine: Using API Version  1
	I0408 12:08:47.836977  403745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:08:47.837413  403745 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:08:47.837674  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:08:47.837874  403745 main.go:141] libmachine: (multinode-830937) Calling .GetState
	I0408 12:08:47.839814  403745 fix.go:112] recreateIfNeeded on multinode-830937: state=Running err=<nil>
	W0408 12:08:47.839841  403745 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:08:47.842580  403745 out.go:177] * Updating the running kvm2 "multinode-830937" VM ...
	I0408 12:08:47.844375  403745 machine.go:94] provisionDockerMachine start ...
	I0408 12:08:47.844421  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:08:47.844803  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:47.847538  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.848089  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:47.848123  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.848416  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:47.848610  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.848792  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.848980  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:47.849216  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:47.849477  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:47.849495  403745 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:08:47.965312  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-830937
	
	I0408 12:08:47.965349  403745 main.go:141] libmachine: (multinode-830937) Calling .GetMachineName
	I0408 12:08:47.965617  403745 buildroot.go:166] provisioning hostname "multinode-830937"
	I0408 12:08:47.965653  403745 main.go:141] libmachine: (multinode-830937) Calling .GetMachineName
	I0408 12:08:47.965895  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:47.968381  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.968959  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:47.968994  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:47.969117  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:47.969335  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.969506  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:47.969668  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:47.969870  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:47.970060  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:47.970073  403745 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-830937 && echo "multinode-830937" | sudo tee /etc/hostname
	I0408 12:08:48.102952  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-830937
	
	I0408 12:08:48.102996  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.105974  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.106406  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.106446  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.106667  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:48.106875  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.107032  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.107157  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:48.107308  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:48.107534  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:48.107560  403745 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-830937' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-830937/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-830937' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:08:48.221115  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:08:48.221150  403745 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:08:48.221169  403745 buildroot.go:174] setting up certificates
	I0408 12:08:48.221181  403745 provision.go:84] configureAuth start
	I0408 12:08:48.221189  403745 main.go:141] libmachine: (multinode-830937) Calling .GetMachineName
	I0408 12:08:48.221543  403745 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:08:48.224185  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.224480  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.224511  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.224690  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.226699  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.226998  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.227021  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.227165  403745 provision.go:143] copyHostCerts
	I0408 12:08:48.227208  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:08:48.227238  403745 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:08:48.227254  403745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:08:48.227318  403745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:08:48.227405  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:08:48.227428  403745 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:08:48.227435  403745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:08:48.227458  403745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:08:48.227531  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:08:48.227555  403745 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:08:48.227561  403745 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:08:48.227584  403745 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:08:48.227646  403745 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.multinode-830937 san=[127.0.0.1 192.168.39.209 localhost minikube multinode-830937]
	I0408 12:08:48.470465  403745 provision.go:177] copyRemoteCerts
	I0408 12:08:48.470539  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:08:48.470608  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.473428  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.473905  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.473942  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.474215  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:48.474453  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.474713  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:48.474914  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:08:48.563654  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0408 12:08:48.563763  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:08:48.596776  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0408 12:08:48.596866  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0408 12:08:48.624649  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0408 12:08:48.624728  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:08:48.651547  403745 provision.go:87] duration metric: took 430.350036ms to configureAuth
	I0408 12:08:48.651588  403745 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:08:48.651874  403745 config.go:182] Loaded profile config "multinode-830937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:08:48.651967  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:08:48.655233  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.655634  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:08:48.655668  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:08:48.655854  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:08:48.656110  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.656286  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:08:48.656517  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:08:48.656748  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:08:48.656970  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:08:48.656985  403745 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:10:19.438708  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:10:19.438749  403745 machine.go:97] duration metric: took 1m31.59434398s to provisionDockerMachine
	I0408 12:10:19.438769  403745 start.go:293] postStartSetup for "multinode-830937" (driver="kvm2")
	I0408 12:10:19.438786  403745 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:10:19.438849  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.439244  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:10:19.439282  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.442802  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.443346  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.443381  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.443559  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.443793  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.443991  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.444159  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:10:19.544501  403745 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:10:19.549079  403745 command_runner.go:130] > NAME=Buildroot
	I0408 12:10:19.549103  403745 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 12:10:19.549109  403745 command_runner.go:130] > ID=buildroot
	I0408 12:10:19.549116  403745 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 12:10:19.549136  403745 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 12:10:19.549405  403745 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:10:19.549427  403745 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:10:19.549499  403745 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:10:19.549600  403745 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:10:19.549614  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /etc/ssl/certs/3758172.pem
	I0408 12:10:19.549731  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:10:19.560135  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:10:19.586870  403745 start.go:296] duration metric: took 148.083692ms for postStartSetup
	I0408 12:10:19.586925  403745 fix.go:56] duration metric: took 1m31.766981525s for fixHost
	I0408 12:10:19.586951  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.589958  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.590431  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.590477  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.590631  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.590851  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.591033  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.591241  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.591401  403745 main.go:141] libmachine: Using SSH client type: native
	I0408 12:10:19.591598  403745 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0408 12:10:19.591624  403745 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:10:19.705001  403745 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712578219.685087142
	
	I0408 12:10:19.705034  403745 fix.go:216] guest clock: 1712578219.685087142
	I0408 12:10:19.705046  403745 fix.go:229] Guest: 2024-04-08 12:10:19.685087142 +0000 UTC Remote: 2024-04-08 12:10:19.5869296 +0000 UTC m=+91.912507154 (delta=98.157542ms)
	I0408 12:10:19.705074  403745 fix.go:200] guest clock delta is within tolerance: 98.157542ms
	I0408 12:10:19.705095  403745 start.go:83] releasing machines lock for "multinode-830937", held for 1m31.885154805s
	I0408 12:10:19.705127  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.705419  403745 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:10:19.708120  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.708658  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.708693  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.708861  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.709399  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.709606  403745 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:10:19.709710  403745 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:10:19.709752  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.709833  403745 ssh_runner.go:195] Run: cat /version.json
	I0408 12:10:19.709849  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:10:19.712528  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.712887  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.712935  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.712994  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.713072  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.713263  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.713403  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:19.713416  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.713430  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:19.713590  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:10:19.713606  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:10:19.713858  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:10:19.714036  403745 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:10:19.714186  403745 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:10:19.793140  403745 command_runner.go:130] > {"iso_version": "v1.33.0-1712138767-18566", "kicbase_version": "v0.0.43-1711559786-18485", "minikube_version": "v1.33.0-beta.0", "commit": "5c97bd855810b9924fd5c0368bb36a4a341f7234"}
	I0408 12:10:19.793445  403745 ssh_runner.go:195] Run: systemctl --version
	I0408 12:10:19.835614  403745 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0408 12:10:19.835679  403745 command_runner.go:130] > systemd 252 (252)
	I0408 12:10:19.835722  403745 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 12:10:19.835799  403745 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:10:20.000529  403745 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 12:10:20.006940  403745 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 12:10:20.007001  403745 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:10:20.007053  403745 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:10:20.018382  403745 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 12:10:20.018415  403745 start.go:494] detecting cgroup driver to use...
	I0408 12:10:20.018484  403745 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:10:20.036437  403745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:10:20.051035  403745 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:10:20.051116  403745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:10:20.065936  403745 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:10:20.081077  403745 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:10:20.235818  403745 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:10:20.379070  403745 docker.go:233] disabling docker service ...
	I0408 12:10:20.379161  403745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:10:20.398308  403745 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:10:20.414085  403745 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:10:20.554603  403745 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:10:20.701792  403745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:10:20.716967  403745 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:10:20.736604  403745 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0408 12:10:20.736984  403745 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:10:20.737051  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.748395  403745 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:10:20.748480  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.759991  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.771682  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.783013  403745 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:10:20.795112  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.807071  403745 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.820107  403745 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:10:20.831983  403745 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:10:20.842888  403745 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 12:10:20.843002  403745 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:10:20.853466  403745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:10:20.999514  403745 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:10:21.262491  403745 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:10:21.262581  403745 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:10:21.267793  403745 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0408 12:10:21.267822  403745 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0408 12:10:21.267831  403745 command_runner.go:130] > Device: 0,22	Inode: 1307        Links: 1
	I0408 12:10:21.267840  403745 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 12:10:21.267848  403745 command_runner.go:130] > Access: 2024-04-08 12:10:21.118310357 +0000
	I0408 12:10:21.267856  403745 command_runner.go:130] > Modify: 2024-04-08 12:10:21.118310357 +0000
	I0408 12:10:21.267863  403745 command_runner.go:130] > Change: 2024-04-08 12:10:21.118310357 +0000
	I0408 12:10:21.267869  403745 command_runner.go:130] >  Birth: -
	I0408 12:10:21.268001  403745 start.go:562] Will wait 60s for crictl version
	I0408 12:10:21.268052  403745 ssh_runner.go:195] Run: which crictl
	I0408 12:10:21.272093  403745 command_runner.go:130] > /usr/bin/crictl
	I0408 12:10:21.272176  403745 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:10:21.315507  403745 command_runner.go:130] > Version:  0.1.0
	I0408 12:10:21.315542  403745 command_runner.go:130] > RuntimeName:  cri-o
	I0408 12:10:21.315548  403745 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0408 12:10:21.315556  403745 command_runner.go:130] > RuntimeApiVersion:  v1
	I0408 12:10:21.315579  403745 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:10:21.315643  403745 ssh_runner.go:195] Run: crio --version
	I0408 12:10:21.347531  403745 command_runner.go:130] > crio version 1.29.1
	I0408 12:10:21.347562  403745 command_runner.go:130] > Version:        1.29.1
	I0408 12:10:21.347575  403745 command_runner.go:130] > GitCommit:      unknown
	I0408 12:10:21.347582  403745 command_runner.go:130] > GitCommitDate:  unknown
	I0408 12:10:21.347587  403745 command_runner.go:130] > GitTreeState:   clean
	I0408 12:10:21.347598  403745 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0408 12:10:21.347603  403745 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 12:10:21.347608  403745 command_runner.go:130] > Compiler:       gc
	I0408 12:10:21.347614  403745 command_runner.go:130] > Platform:       linux/amd64
	I0408 12:10:21.347621  403745 command_runner.go:130] > Linkmode:       dynamic
	I0408 12:10:21.347628  403745 command_runner.go:130] > BuildTags:      
	I0408 12:10:21.347635  403745 command_runner.go:130] >   containers_image_ostree_stub
	I0408 12:10:21.347642  403745 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 12:10:21.347650  403745 command_runner.go:130] >   btrfs_noversion
	I0408 12:10:21.347658  403745 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 12:10:21.347666  403745 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 12:10:21.347672  403745 command_runner.go:130] >   seccomp
	I0408 12:10:21.347680  403745 command_runner.go:130] > LDFlags:          unknown
	I0408 12:10:21.347701  403745 command_runner.go:130] > SeccompEnabled:   true
	I0408 12:10:21.347712  403745 command_runner.go:130] > AppArmorEnabled:  false
	I0408 12:10:21.347799  403745 ssh_runner.go:195] Run: crio --version
	I0408 12:10:21.378502  403745 command_runner.go:130] > crio version 1.29.1
	I0408 12:10:21.378537  403745 command_runner.go:130] > Version:        1.29.1
	I0408 12:10:21.378546  403745 command_runner.go:130] > GitCommit:      unknown
	I0408 12:10:21.378552  403745 command_runner.go:130] > GitCommitDate:  unknown
	I0408 12:10:21.378558  403745 command_runner.go:130] > GitTreeState:   clean
	I0408 12:10:21.378570  403745 command_runner.go:130] > BuildDate:      2024-04-03T13:58:01Z
	I0408 12:10:21.378575  403745 command_runner.go:130] > GoVersion:      go1.21.6
	I0408 12:10:21.378578  403745 command_runner.go:130] > Compiler:       gc
	I0408 12:10:21.378582  403745 command_runner.go:130] > Platform:       linux/amd64
	I0408 12:10:21.378587  403745 command_runner.go:130] > Linkmode:       dynamic
	I0408 12:10:21.378594  403745 command_runner.go:130] > BuildTags:      
	I0408 12:10:21.378601  403745 command_runner.go:130] >   containers_image_ostree_stub
	I0408 12:10:21.378609  403745 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0408 12:10:21.378615  403745 command_runner.go:130] >   btrfs_noversion
	I0408 12:10:21.378632  403745 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0408 12:10:21.378638  403745 command_runner.go:130] >   libdm_no_deferred_remove
	I0408 12:10:21.378643  403745 command_runner.go:130] >   seccomp
	I0408 12:10:21.378724  403745 command_runner.go:130] > LDFlags:          unknown
	I0408 12:10:21.378759  403745 command_runner.go:130] > SeccompEnabled:   true
	I0408 12:10:21.378767  403745 command_runner.go:130] > AppArmorEnabled:  false
	I0408 12:10:21.380809  403745 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:10:21.382388  403745 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:10:21.384892  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:21.385368  403745 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:10:21.385400  403745 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:10:21.385617  403745 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:10:21.390567  403745 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0408 12:10:21.390690  403745 kubeadm.go:877] updating cluster {Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:10:21.390876  403745 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:10:21.390949  403745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:10:21.440570  403745 command_runner.go:130] > {
	I0408 12:10:21.440599  403745 command_runner.go:130] >   "images": [
	I0408 12:10:21.440605  403745 command_runner.go:130] >     {
	I0408 12:10:21.440623  403745 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0408 12:10:21.440630  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.440639  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0408 12:10:21.440644  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440651  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.440664  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0408 12:10:21.440679  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0408 12:10:21.440688  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440696  403745 command_runner.go:130] >       "size": "65291810",
	I0408 12:10:21.440706  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.440716  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.440736  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.440745  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.440752  403745 command_runner.go:130] >     },
	I0408 12:10:21.440761  403745 command_runner.go:130] >     {
	I0408 12:10:21.440772  403745 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0408 12:10:21.440790  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.440801  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0408 12:10:21.440814  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440824  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.440839  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0408 12:10:21.440854  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0408 12:10:21.440863  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440870  403745 command_runner.go:130] >       "size": "1363676",
	I0408 12:10:21.440881  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.440896  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.440905  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.440912  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.440922  403745 command_runner.go:130] >     },
	I0408 12:10:21.440929  403745 command_runner.go:130] >     {
	I0408 12:10:21.440943  403745 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 12:10:21.440950  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.440962  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 12:10:21.440970  403745 command_runner.go:130] >       ],
	I0408 12:10:21.440978  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.440994  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 12:10:21.441016  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 12:10:21.441030  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441037  403745 command_runner.go:130] >       "size": "31470524",
	I0408 12:10:21.441043  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.441050  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441060  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441069  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441078  403745 command_runner.go:130] >     },
	I0408 12:10:21.441085  403745 command_runner.go:130] >     {
	I0408 12:10:21.441099  403745 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0408 12:10:21.441108  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441117  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0408 12:10:21.441125  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441138  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441153  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0408 12:10:21.441177  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0408 12:10:21.441186  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441194  403745 command_runner.go:130] >       "size": "61245718",
	I0408 12:10:21.441204  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.441213  403745 command_runner.go:130] >       "username": "nonroot",
	I0408 12:10:21.441221  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441231  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441239  403745 command_runner.go:130] >     },
	I0408 12:10:21.441247  403745 command_runner.go:130] >     {
	I0408 12:10:21.441257  403745 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0408 12:10:21.441267  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441275  403745 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0408 12:10:21.441283  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441290  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441305  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0408 12:10:21.441319  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0408 12:10:21.441328  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441335  403745 command_runner.go:130] >       "size": "150779692",
	I0408 12:10:21.441345  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441353  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441375  403745 command_runner.go:130] >       },
	I0408 12:10:21.441393  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441410  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441419  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441425  403745 command_runner.go:130] >     },
	I0408 12:10:21.441432  403745 command_runner.go:130] >     {
	I0408 12:10:21.441445  403745 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0408 12:10:21.441455  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441464  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0408 12:10:21.441477  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441487  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441501  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0408 12:10:21.441518  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0408 12:10:21.441522  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441528  403745 command_runner.go:130] >       "size": "128508878",
	I0408 12:10:21.441534  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441540  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441546  403745 command_runner.go:130] >       },
	I0408 12:10:21.441552  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441562  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441568  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441576  403745 command_runner.go:130] >     },
	I0408 12:10:21.441579  403745 command_runner.go:130] >     {
	I0408 12:10:21.441588  403745 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0408 12:10:21.441592  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441599  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0408 12:10:21.441603  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441608  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441616  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0408 12:10:21.441626  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0408 12:10:21.441636  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441642  403745 command_runner.go:130] >       "size": "123142962",
	I0408 12:10:21.441647  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441654  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441663  403745 command_runner.go:130] >       },
	I0408 12:10:21.441680  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441691  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441705  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441714  403745 command_runner.go:130] >     },
	I0408 12:10:21.441718  403745 command_runner.go:130] >     {
	I0408 12:10:21.441729  403745 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0408 12:10:21.441739  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441748  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0408 12:10:21.441754  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441763  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441795  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0408 12:10:21.441809  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0408 12:10:21.441817  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441826  403745 command_runner.go:130] >       "size": "83634073",
	I0408 12:10:21.441834  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.441841  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441848  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441853  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441858  403745 command_runner.go:130] >     },
	I0408 12:10:21.441863  403745 command_runner.go:130] >     {
	I0408 12:10:21.441871  403745 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0408 12:10:21.441877  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.441883  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0408 12:10:21.441888  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441893  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.441903  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0408 12:10:21.441915  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0408 12:10:21.441920  403745 command_runner.go:130] >       ],
	I0408 12:10:21.441926  403745 command_runner.go:130] >       "size": "60724018",
	I0408 12:10:21.441931  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.441941  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.441947  403745 command_runner.go:130] >       },
	I0408 12:10:21.441956  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.441962  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.441971  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.441979  403745 command_runner.go:130] >     },
	I0408 12:10:21.441984  403745 command_runner.go:130] >     {
	I0408 12:10:21.441997  403745 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0408 12:10:21.442016  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.442026  403745 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0408 12:10:21.442034  403745 command_runner.go:130] >       ],
	I0408 12:10:21.442040  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.442051  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0408 12:10:21.442061  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0408 12:10:21.442066  403745 command_runner.go:130] >       ],
	I0408 12:10:21.442071  403745 command_runner.go:130] >       "size": "750414",
	I0408 12:10:21.442077  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.442083  403745 command_runner.go:130] >         "value": "65535"
	I0408 12:10:21.442088  403745 command_runner.go:130] >       },
	I0408 12:10:21.442094  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.442104  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.442111  403745 command_runner.go:130] >       "pinned": true
	I0408 12:10:21.442120  403745 command_runner.go:130] >     }
	I0408 12:10:21.442125  403745 command_runner.go:130] >   ]
	I0408 12:10:21.442130  403745 command_runner.go:130] > }
	I0408 12:10:21.442422  403745 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:10:21.442442  403745 crio.go:433] Images already preloaded, skipping extraction
	I0408 12:10:21.442514  403745 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:10:21.482151  403745 command_runner.go:130] > {
	I0408 12:10:21.482181  403745 command_runner.go:130] >   "images": [
	I0408 12:10:21.482186  403745 command_runner.go:130] >     {
	I0408 12:10:21.482193  403745 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0408 12:10:21.482205  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482212  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0408 12:10:21.482215  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482220  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482234  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0408 12:10:21.482249  403745 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0408 12:10:21.482256  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482263  403745 command_runner.go:130] >       "size": "65291810",
	I0408 12:10:21.482271  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482275  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482291  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482297  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482301  403745 command_runner.go:130] >     },
	I0408 12:10:21.482304  403745 command_runner.go:130] >     {
	I0408 12:10:21.482317  403745 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0408 12:10:21.482324  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482336  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0408 12:10:21.482345  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482351  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482365  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0408 12:10:21.482377  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0408 12:10:21.482383  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482387  403745 command_runner.go:130] >       "size": "1363676",
	I0408 12:10:21.482394  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482404  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482414  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482425  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482449  403745 command_runner.go:130] >     },
	I0408 12:10:21.482458  403745 command_runner.go:130] >     {
	I0408 12:10:21.482468  403745 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0408 12:10:21.482477  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482487  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0408 12:10:21.482496  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482507  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482522  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0408 12:10:21.482537  403745 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0408 12:10:21.482552  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482562  403745 command_runner.go:130] >       "size": "31470524",
	I0408 12:10:21.482566  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482570  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482577  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482588  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482597  403745 command_runner.go:130] >     },
	I0408 12:10:21.482603  403745 command_runner.go:130] >     {
	I0408 12:10:21.482616  403745 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0408 12:10:21.482626  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482637  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0408 12:10:21.482645  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482652  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482660  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0408 12:10:21.482691  403745 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0408 12:10:21.482703  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482709  403745 command_runner.go:130] >       "size": "61245718",
	I0408 12:10:21.482719  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.482728  403745 command_runner.go:130] >       "username": "nonroot",
	I0408 12:10:21.482738  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482747  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482756  403745 command_runner.go:130] >     },
	I0408 12:10:21.482766  403745 command_runner.go:130] >     {
	I0408 12:10:21.482778  403745 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0408 12:10:21.482788  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482798  403745 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0408 12:10:21.482806  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482816  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.482824  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0408 12:10:21.482837  403745 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0408 12:10:21.482847  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482856  403745 command_runner.go:130] >       "size": "150779692",
	I0408 12:10:21.482872  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.482881  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.482890  403745 command_runner.go:130] >       },
	I0408 12:10:21.482900  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.482911  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.482919  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.482928  403745 command_runner.go:130] >     },
	I0408 12:10:21.482937  403745 command_runner.go:130] >     {
	I0408 12:10:21.482950  403745 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0408 12:10:21.482960  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.482971  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0408 12:10:21.482980  403745 command_runner.go:130] >       ],
	I0408 12:10:21.482989  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483002  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0408 12:10:21.483014  403745 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0408 12:10:21.483024  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483035  403745 command_runner.go:130] >       "size": "128508878",
	I0408 12:10:21.483044  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483054  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.483063  403745 command_runner.go:130] >       },
	I0408 12:10:21.483073  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483082  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483089  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483092  403745 command_runner.go:130] >     },
	I0408 12:10:21.483096  403745 command_runner.go:130] >     {
	I0408 12:10:21.483110  403745 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0408 12:10:21.483120  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483132  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0408 12:10:21.483140  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483150  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483162  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0408 12:10:21.483174  403745 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0408 12:10:21.483186  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483196  403745 command_runner.go:130] >       "size": "123142962",
	I0408 12:10:21.483213  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483223  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.483231  403745 command_runner.go:130] >       },
	I0408 12:10:21.483240  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483250  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483259  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483273  403745 command_runner.go:130] >     },
	I0408 12:10:21.483281  403745 command_runner.go:130] >     {
	I0408 12:10:21.483295  403745 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0408 12:10:21.483305  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483316  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0408 12:10:21.483326  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483336  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483363  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0408 12:10:21.483381  403745 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0408 12:10:21.483386  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483391  403745 command_runner.go:130] >       "size": "83634073",
	I0408 12:10:21.483395  403745 command_runner.go:130] >       "uid": null,
	I0408 12:10:21.483401  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483408  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483414  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483422  403745 command_runner.go:130] >     },
	I0408 12:10:21.483428  403745 command_runner.go:130] >     {
	I0408 12:10:21.483447  403745 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0408 12:10:21.483457  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483467  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0408 12:10:21.483477  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483485  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483500  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0408 12:10:21.483516  403745 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0408 12:10:21.483524  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483532  403745 command_runner.go:130] >       "size": "60724018",
	I0408 12:10:21.483541  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483549  403745 command_runner.go:130] >         "value": "0"
	I0408 12:10:21.483557  403745 command_runner.go:130] >       },
	I0408 12:10:21.483564  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483571  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483580  403745 command_runner.go:130] >       "pinned": false
	I0408 12:10:21.483588  403745 command_runner.go:130] >     },
	I0408 12:10:21.483597  403745 command_runner.go:130] >     {
	I0408 12:10:21.483608  403745 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0408 12:10:21.483618  403745 command_runner.go:130] >       "repoTags": [
	I0408 12:10:21.483635  403745 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0408 12:10:21.483649  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483657  403745 command_runner.go:130] >       "repoDigests": [
	I0408 12:10:21.483672  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0408 12:10:21.483701  403745 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0408 12:10:21.483710  403745 command_runner.go:130] >       ],
	I0408 12:10:21.483718  403745 command_runner.go:130] >       "size": "750414",
	I0408 12:10:21.483726  403745 command_runner.go:130] >       "uid": {
	I0408 12:10:21.483734  403745 command_runner.go:130] >         "value": "65535"
	I0408 12:10:21.483742  403745 command_runner.go:130] >       },
	I0408 12:10:21.483749  403745 command_runner.go:130] >       "username": "",
	I0408 12:10:21.483760  403745 command_runner.go:130] >       "spec": null,
	I0408 12:10:21.483769  403745 command_runner.go:130] >       "pinned": true
	I0408 12:10:21.483775  403745 command_runner.go:130] >     }
	I0408 12:10:21.483781  403745 command_runner.go:130] >   ]
	I0408 12:10:21.483789  403745 command_runner.go:130] > }
	I0408 12:10:21.483927  403745 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:10:21.483941  403745 cache_images.go:84] Images are preloaded, skipping loading
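	The JSON above is the image list CRI-O reports over the CRI (the same shape `crictl images -o json` prints); minikube scans it to confirm that every image needed for Kubernetes v1.29.3 is already present, which is why it can skip the load step here. Below is a minimal illustrative decoder for that output — not minikube's actual code, and it assumes the top-level key is "images" as in crictl's JSON.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// criImage mirrors the per-image fields visible in the log output above.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // reported as a decimal string, e.g. "61245718"
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Truncated sample shaped like the log output above.
		raw := []byte(`{"images":[{"id":"cbb01a7bd410","repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"repoDigests":[],"size":"61245718","pinned":false}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}

		// Check that one required tag from the preload manifest is present.
		want := "registry.k8s.io/coredns/coredns:v1.11.1"
		found := false
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					found = true
				}
			}
		}
		fmt.Printf("%s preloaded: %v\n", want, found)
	}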
	I0408 12:10:21.483952  403745 kubeadm.go:928] updating node { 192.168.39.209 8443 v1.29.3 crio true true} ...
	I0408 12:10:21.484088  403745 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-830937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
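	The [Unit]/[Service]/[Install] block above is the kubelet systemd drop-in minikube renders for this node, with the node name (multinode-830937) and IP (192.168.39.209) substituted into the ExecStart flags. The sketch below shows how such a drop-in could be generated with text/template; it is illustrative only and is not minikube's actual template or file layout.

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical template mirroring the drop-in printed in the log above.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from the log line above.
		_ = t.Execute(os.Stdout, map[string]string{
			"Version": "v1.29.3",
			"Node":    "multinode-830937",
			"IP":      "192.168.39.209",
		})
	}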
	I0408 12:10:21.484170  403745 ssh_runner.go:195] Run: crio config
	I0408 12:10:21.518981  403745 command_runner.go:130] ! time="2024-04-08 12:10:21.499072989Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0408 12:10:21.525800  403745 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0408 12:10:21.533738  403745 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0408 12:10:21.533769  403745 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0408 12:10:21.533780  403745 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0408 12:10:21.533786  403745 command_runner.go:130] > #
	I0408 12:10:21.533796  403745 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0408 12:10:21.533806  403745 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0408 12:10:21.533821  403745 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0408 12:10:21.533831  403745 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0408 12:10:21.533841  403745 command_runner.go:130] > # reload'.
	I0408 12:10:21.533855  403745 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0408 12:10:21.533868  403745 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0408 12:10:21.533882  403745 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0408 12:10:21.533892  403745 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0408 12:10:21.533901  403745 command_runner.go:130] > [crio]
	I0408 12:10:21.533912  403745 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0408 12:10:21.533922  403745 command_runner.go:130] > # containers images, in this directory.
	I0408 12:10:21.533929  403745 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0408 12:10:21.533947  403745 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0408 12:10:21.533958  403745 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0408 12:10:21.533973  403745 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0408 12:10:21.533982  403745 command_runner.go:130] > # imagestore = ""
	I0408 12:10:21.533992  403745 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0408 12:10:21.534003  403745 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0408 12:10:21.534011  403745 command_runner.go:130] > storage_driver = "overlay"
	I0408 12:10:21.534016  403745 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0408 12:10:21.534026  403745 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0408 12:10:21.534035  403745 command_runner.go:130] > storage_option = [
	I0408 12:10:21.534046  403745 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0408 12:10:21.534054  403745 command_runner.go:130] > ]
	I0408 12:10:21.534067  403745 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0408 12:10:21.534080  403745 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0408 12:10:21.534090  403745 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0408 12:10:21.534101  403745 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0408 12:10:21.534107  403745 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0408 12:10:21.534116  403745 command_runner.go:130] > # always happen on a node reboot
	I0408 12:10:21.534128  403745 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0408 12:10:21.534152  403745 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0408 12:10:21.534164  403745 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0408 12:10:21.534177  403745 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0408 12:10:21.534186  403745 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0408 12:10:21.534197  403745 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0408 12:10:21.534216  403745 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0408 12:10:21.534227  403745 command_runner.go:130] > # internal_wipe = true
	I0408 12:10:21.534242  403745 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0408 12:10:21.534259  403745 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0408 12:10:21.534268  403745 command_runner.go:130] > # internal_repair = false
	I0408 12:10:21.534277  403745 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0408 12:10:21.534285  403745 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0408 12:10:21.534297  403745 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0408 12:10:21.534308  403745 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0408 12:10:21.534323  403745 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0408 12:10:21.534333  403745 command_runner.go:130] > [crio.api]
	I0408 12:10:21.534345  403745 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0408 12:10:21.534355  403745 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0408 12:10:21.534363  403745 command_runner.go:130] > # IP address on which the stream server will listen.
	I0408 12:10:21.534369  403745 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0408 12:10:21.534383  403745 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0408 12:10:21.534394  403745 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0408 12:10:21.534403  403745 command_runner.go:130] > # stream_port = "0"
	I0408 12:10:21.534415  403745 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0408 12:10:21.534424  403745 command_runner.go:130] > # stream_enable_tls = false
	I0408 12:10:21.534436  403745 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0408 12:10:21.534449  403745 command_runner.go:130] > # stream_idle_timeout = ""
	I0408 12:10:21.534460  403745 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0408 12:10:21.534474  403745 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0408 12:10:21.534482  403745 command_runner.go:130] > # minutes.
	I0408 12:10:21.534489  403745 command_runner.go:130] > # stream_tls_cert = ""
	I0408 12:10:21.534501  403745 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0408 12:10:21.534513  403745 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0408 12:10:21.534523  403745 command_runner.go:130] > # stream_tls_key = ""
	I0408 12:10:21.534533  403745 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0408 12:10:21.534544  403745 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0408 12:10:21.534573  403745 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0408 12:10:21.534587  403745 command_runner.go:130] > # stream_tls_ca = ""
	I0408 12:10:21.534598  403745 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 12:10:21.534609  403745 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0408 12:10:21.534620  403745 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0408 12:10:21.534628  403745 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
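	(For scale: 16777216 = 16 * 1024 * 1024 bytes, i.e. 16 MiB, so the generated config caps both gRPC message sizes well below the documented fallback of 80 * 1024 * 1024 = 83,886,080 bytes, or 80 MiB.)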
	I0408 12:10:21.534641  403745 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0408 12:10:21.534653  403745 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0408 12:10:21.534668  403745 command_runner.go:130] > [crio.runtime]
	I0408 12:10:21.534680  403745 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0408 12:10:21.534693  403745 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0408 12:10:21.534701  403745 command_runner.go:130] > # "nofile=1024:2048"
	I0408 12:10:21.534710  403745 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0408 12:10:21.534719  403745 command_runner.go:130] > # default_ulimits = [
	I0408 12:10:21.534729  403745 command_runner.go:130] > # ]
	I0408 12:10:21.534742  403745 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0408 12:10:21.534751  403745 command_runner.go:130] > # no_pivot = false
	I0408 12:10:21.534764  403745 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0408 12:10:21.534776  403745 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0408 12:10:21.534794  403745 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0408 12:10:21.534805  403745 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0408 12:10:21.534817  403745 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0408 12:10:21.534830  403745 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 12:10:21.534841  403745 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0408 12:10:21.534850  403745 command_runner.go:130] > # Cgroup setting for conmon
	I0408 12:10:21.534863  403745 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0408 12:10:21.534873  403745 command_runner.go:130] > conmon_cgroup = "pod"
	I0408 12:10:21.534882  403745 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0408 12:10:21.534892  403745 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0408 12:10:21.534906  403745 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0408 12:10:21.534915  403745 command_runner.go:130] > conmon_env = [
	I0408 12:10:21.534928  403745 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 12:10:21.534936  403745 command_runner.go:130] > ]
	I0408 12:10:21.534947  403745 command_runner.go:130] > # Additional environment variables to set for all the
	I0408 12:10:21.534957  403745 command_runner.go:130] > # containers. These are overridden if set in the
	I0408 12:10:21.534966  403745 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0408 12:10:21.534973  403745 command_runner.go:130] > # default_env = [
	I0408 12:10:21.534978  403745 command_runner.go:130] > # ]
	I0408 12:10:21.534991  403745 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0408 12:10:21.535005  403745 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0408 12:10:21.535014  403745 command_runner.go:130] > # selinux = false
	I0408 12:10:21.535025  403745 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0408 12:10:21.535038  403745 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0408 12:10:21.535049  403745 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0408 12:10:21.535063  403745 command_runner.go:130] > # seccomp_profile = ""
	I0408 12:10:21.535075  403745 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0408 12:10:21.535088  403745 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0408 12:10:21.535100  403745 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0408 12:10:21.535110  403745 command_runner.go:130] > # which might increase security.
	I0408 12:10:21.535120  403745 command_runner.go:130] > # This option is currently deprecated,
	I0408 12:10:21.535132  403745 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0408 12:10:21.535140  403745 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0408 12:10:21.535150  403745 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0408 12:10:21.535163  403745 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0408 12:10:21.535180  403745 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0408 12:10:21.535192  403745 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0408 12:10:21.535203  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.535213  403745 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0408 12:10:21.535224  403745 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0408 12:10:21.535235  403745 command_runner.go:130] > # the cgroup blockio controller.
	I0408 12:10:21.535245  403745 command_runner.go:130] > # blockio_config_file = ""
	I0408 12:10:21.535259  403745 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0408 12:10:21.535269  403745 command_runner.go:130] > # blockio parameters.
	I0408 12:10:21.535278  403745 command_runner.go:130] > # blockio_reload = false
	I0408 12:10:21.535291  403745 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0408 12:10:21.535301  403745 command_runner.go:130] > # irqbalance daemon.
	I0408 12:10:21.535311  403745 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0408 12:10:21.535320  403745 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0408 12:10:21.535334  403745 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0408 12:10:21.535347  403745 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0408 12:10:21.535359  403745 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0408 12:10:21.535376  403745 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0408 12:10:21.535387  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.535395  403745 command_runner.go:130] > # rdt_config_file = ""
	I0408 12:10:21.535400  403745 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0408 12:10:21.535410  403745 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0408 12:10:21.535470  403745 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0408 12:10:21.535482  403745 command_runner.go:130] > # separate_pull_cgroup = ""
	I0408 12:10:21.535489  403745 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0408 12:10:21.535499  403745 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0408 12:10:21.535517  403745 command_runner.go:130] > # will be added.
	I0408 12:10:21.535528  403745 command_runner.go:130] > # default_capabilities = [
	I0408 12:10:21.535537  403745 command_runner.go:130] > # 	"CHOWN",
	I0408 12:10:21.535547  403745 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0408 12:10:21.535555  403745 command_runner.go:130] > # 	"FSETID",
	I0408 12:10:21.535564  403745 command_runner.go:130] > # 	"FOWNER",
	I0408 12:10:21.535571  403745 command_runner.go:130] > # 	"SETGID",
	I0408 12:10:21.535575  403745 command_runner.go:130] > # 	"SETUID",
	I0408 12:10:21.535578  403745 command_runner.go:130] > # 	"SETPCAP",
	I0408 12:10:21.535588  403745 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0408 12:10:21.535597  403745 command_runner.go:130] > # 	"KILL",
	I0408 12:10:21.535606  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535620  403745 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0408 12:10:21.535633  403745 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0408 12:10:21.535646  403745 command_runner.go:130] > # add_inheritable_capabilities = false
	I0408 12:10:21.535657  403745 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0408 12:10:21.535665  403745 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 12:10:21.535671  403745 command_runner.go:130] > default_sysctls = [
	I0408 12:10:21.535681  403745 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0408 12:10:21.535696  403745 command_runner.go:130] > ]
	I0408 12:10:21.535705  403745 command_runner.go:130] > # List of devices on the host that a
	I0408 12:10:21.535716  403745 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0408 12:10:21.535723  403745 command_runner.go:130] > # allowed_devices = [
	I0408 12:10:21.535729  403745 command_runner.go:130] > # 	"/dev/fuse",
	I0408 12:10:21.535734  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535739  403745 command_runner.go:130] > # List of additional devices, specified as
	I0408 12:10:21.535751  403745 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0408 12:10:21.535769  403745 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0408 12:10:21.535782  403745 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0408 12:10:21.535791  403745 command_runner.go:130] > # additional_devices = [
	I0408 12:10:21.535799  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535811  403745 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0408 12:10:21.535820  403745 command_runner.go:130] > # cdi_spec_dirs = [
	I0408 12:10:21.535827  403745 command_runner.go:130] > # 	"/etc/cdi",
	I0408 12:10:21.535831  403745 command_runner.go:130] > # 	"/var/run/cdi",
	I0408 12:10:21.535839  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535859  403745 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0408 12:10:21.535872  403745 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0408 12:10:21.535881  403745 command_runner.go:130] > # Defaults to false.
	I0408 12:10:21.535892  403745 command_runner.go:130] > # device_ownership_from_security_context = false
	I0408 12:10:21.535904  403745 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0408 12:10:21.535914  403745 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0408 12:10:21.535922  403745 command_runner.go:130] > # hooks_dir = [
	I0408 12:10:21.535933  403745 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0408 12:10:21.535938  403745 command_runner.go:130] > # ]
	I0408 12:10:21.535951  403745 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0408 12:10:21.535963  403745 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0408 12:10:21.535974  403745 command_runner.go:130] > # its default mounts from the following two files:
	I0408 12:10:21.535982  403745 command_runner.go:130] > #
	I0408 12:10:21.535994  403745 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0408 12:10:21.536003  403745 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0408 12:10:21.536013  403745 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0408 12:10:21.536022  403745 command_runner.go:130] > #
	I0408 12:10:21.536031  403745 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0408 12:10:21.536044  403745 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0408 12:10:21.536057  403745 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0408 12:10:21.536071  403745 command_runner.go:130] > #      only add mounts it finds in this file.
	I0408 12:10:21.536079  403745 command_runner.go:130] > #
	I0408 12:10:21.536085  403745 command_runner.go:130] > # default_mounts_file = ""
	I0408 12:10:21.536092  403745 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0408 12:10:21.536106  403745 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0408 12:10:21.536116  403745 command_runner.go:130] > pids_limit = 1024
	I0408 12:10:21.536129  403745 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0408 12:10:21.536142  403745 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0408 12:10:21.536154  403745 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0408 12:10:21.536168  403745 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0408 12:10:21.536174  403745 command_runner.go:130] > # log_size_max = -1
	I0408 12:10:21.536185  403745 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0408 12:10:21.536195  403745 command_runner.go:130] > # log_to_journald = false
	I0408 12:10:21.536208  403745 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0408 12:10:21.536219  403745 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0408 12:10:21.536226  403745 command_runner.go:130] > # Path to directory for container attach sockets.
	I0408 12:10:21.536243  403745 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0408 12:10:21.536254  403745 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0408 12:10:21.536261  403745 command_runner.go:130] > # bind_mount_prefix = ""
	I0408 12:10:21.536268  403745 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0408 12:10:21.536277  403745 command_runner.go:130] > # read_only = false
	I0408 12:10:21.536291  403745 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0408 12:10:21.536303  403745 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0408 12:10:21.536312  403745 command_runner.go:130] > # live configuration reload.
	I0408 12:10:21.536322  403745 command_runner.go:130] > # log_level = "info"
	I0408 12:10:21.536333  403745 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0408 12:10:21.536342  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.536349  403745 command_runner.go:130] > # log_filter = ""
	I0408 12:10:21.536358  403745 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0408 12:10:21.536374  403745 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0408 12:10:21.536383  403745 command_runner.go:130] > # separated by comma.
	I0408 12:10:21.536394  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536404  403745 command_runner.go:130] > # uid_mappings = ""
	I0408 12:10:21.536416  403745 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0408 12:10:21.536427  403745 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0408 12:10:21.536434  403745 command_runner.go:130] > # separated by comma.
	I0408 12:10:21.536449  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536462  403745 command_runner.go:130] > # gid_mappings = ""
	I0408 12:10:21.536475  403745 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0408 12:10:21.536486  403745 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 12:10:21.536499  403745 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 12:10:21.536512  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536519  403745 command_runner.go:130] > # minimum_mappable_uid = -1
	I0408 12:10:21.536528  403745 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0408 12:10:21.536540  403745 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0408 12:10:21.536553  403745 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0408 12:10:21.536568  403745 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0408 12:10:21.536577  403745 command_runner.go:130] > # minimum_mappable_gid = -1
	I0408 12:10:21.536589  403745 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0408 12:10:21.536601  403745 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0408 12:10:21.536610  403745 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0408 12:10:21.536619  403745 command_runner.go:130] > # ctr_stop_timeout = 30
	I0408 12:10:21.536638  403745 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0408 12:10:21.536651  403745 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0408 12:10:21.536662  403745 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0408 12:10:21.536676  403745 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0408 12:10:21.536685  403745 command_runner.go:130] > drop_infra_ctr = false
	I0408 12:10:21.536694  403745 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0408 12:10:21.536705  403745 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0408 12:10:21.536720  403745 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0408 12:10:21.536730  403745 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0408 12:10:21.536744  403745 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0408 12:10:21.536755  403745 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0408 12:10:21.536767  403745 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0408 12:10:21.536775  403745 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0408 12:10:21.536783  403745 command_runner.go:130] > # shared_cpuset = ""
	I0408 12:10:21.536796  403745 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0408 12:10:21.536807  403745 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0408 12:10:21.536817  403745 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0408 12:10:21.536831  403745 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0408 12:10:21.536841  403745 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0408 12:10:21.536852  403745 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0408 12:10:21.536864  403745 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0408 12:10:21.536873  403745 command_runner.go:130] > # enable_criu_support = false
	I0408 12:10:21.536885  403745 command_runner.go:130] > # Enable/disable the generation of the container,
	I0408 12:10:21.536897  403745 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0408 12:10:21.536907  403745 command_runner.go:130] > # enable_pod_events = false
	I0408 12:10:21.536919  403745 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0408 12:10:21.536942  403745 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0408 12:10:21.536950  403745 command_runner.go:130] > # default_runtime = "runc"
	I0408 12:10:21.536956  403745 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0408 12:10:21.536977  403745 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0408 12:10:21.536993  403745 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0408 12:10:21.537004  403745 command_runner.go:130] > # creation as a file is not desired either.
	I0408 12:10:21.537019  403745 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0408 12:10:21.537029  403745 command_runner.go:130] > # the hostname is being managed dynamically.
	I0408 12:10:21.537036  403745 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0408 12:10:21.537046  403745 command_runner.go:130] > # ]
	I0408 12:10:21.537060  403745 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0408 12:10:21.537073  403745 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0408 12:10:21.537085  403745 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0408 12:10:21.537097  403745 command_runner.go:130] > # Each entry in the table should follow the format:
	I0408 12:10:21.537105  403745 command_runner.go:130] > #
	I0408 12:10:21.537115  403745 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0408 12:10:21.537123  403745 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0408 12:10:21.537179  403745 command_runner.go:130] > # runtime_type = "oci"
	I0408 12:10:21.537191  403745 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0408 12:10:21.537199  403745 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0408 12:10:21.537207  403745 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0408 12:10:21.537212  403745 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0408 12:10:21.537220  403745 command_runner.go:130] > # monitor_env = []
	I0408 12:10:21.537231  403745 command_runner.go:130] > # privileged_without_host_devices = false
	I0408 12:10:21.537241  403745 command_runner.go:130] > # allowed_annotations = []
	I0408 12:10:21.537253  403745 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0408 12:10:21.537261  403745 command_runner.go:130] > # Where:
	I0408 12:10:21.537273  403745 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0408 12:10:21.537285  403745 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0408 12:10:21.537295  403745 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0408 12:10:21.537305  403745 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0408 12:10:21.537317  403745 command_runner.go:130] > #   in $PATH.
	I0408 12:10:21.537330  403745 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0408 12:10:21.537342  403745 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0408 12:10:21.537353  403745 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0408 12:10:21.537362  403745 command_runner.go:130] > #   state.
	I0408 12:10:21.537374  403745 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0408 12:10:21.537383  403745 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0408 12:10:21.537394  403745 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0408 12:10:21.537406  403745 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0408 12:10:21.537418  403745 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0408 12:10:21.537431  403745 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0408 12:10:21.537445  403745 command_runner.go:130] > #   The currently recognized values are:
	I0408 12:10:21.537458  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0408 12:10:21.537470  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0408 12:10:21.537486  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0408 12:10:21.537504  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0408 12:10:21.537519  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0408 12:10:21.537533  403745 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0408 12:10:21.537546  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0408 12:10:21.537556  403745 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0408 12:10:21.537567  403745 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0408 12:10:21.537580  403745 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0408 12:10:21.537591  403745 command_runner.go:130] > #   deprecated option "conmon".
	I0408 12:10:21.537605  403745 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0408 12:10:21.537617  403745 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0408 12:10:21.537636  403745 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0408 12:10:21.537644  403745 command_runner.go:130] > #   should be moved to the container's cgroup
	I0408 12:10:21.537652  403745 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0408 12:10:21.537663  403745 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0408 12:10:21.537676  403745 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0408 12:10:21.537688  403745 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0408 12:10:21.537696  403745 command_runner.go:130] > #
	I0408 12:10:21.537707  403745 command_runner.go:130] > # Using the seccomp notifier feature:
	I0408 12:10:21.537717  403745 command_runner.go:130] > #
	I0408 12:10:21.537728  403745 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0408 12:10:21.537739  403745 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0408 12:10:21.537747  403745 command_runner.go:130] > #
	I0408 12:10:21.537760  403745 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0408 12:10:21.537773  403745 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0408 12:10:21.537780  403745 command_runner.go:130] > #
	I0408 12:10:21.537793  403745 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0408 12:10:21.537801  403745 command_runner.go:130] > # feature.
	I0408 12:10:21.537808  403745 command_runner.go:130] > #
	I0408 12:10:21.537815  403745 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0408 12:10:21.537826  403745 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0408 12:10:21.537840  403745 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0408 12:10:21.537853  403745 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0408 12:10:21.537865  403745 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0408 12:10:21.537873  403745 command_runner.go:130] > #
	I0408 12:10:21.537882  403745 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0408 12:10:21.537899  403745 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0408 12:10:21.537905  403745 command_runner.go:130] > #
	I0408 12:10:21.537914  403745 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0408 12:10:21.537927  403745 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0408 12:10:21.537935  403745 command_runner.go:130] > #
	I0408 12:10:21.537948  403745 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0408 12:10:21.537960  403745 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0408 12:10:21.537969  403745 command_runner.go:130] > # limitation.
	I0408 12:10:21.537980  403745 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0408 12:10:21.537988  403745 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0408 12:10:21.537996  403745 command_runner.go:130] > runtime_type = "oci"
	I0408 12:10:21.538002  403745 command_runner.go:130] > runtime_root = "/run/runc"
	I0408 12:10:21.538011  403745 command_runner.go:130] > runtime_config_path = ""
	I0408 12:10:21.538023  403745 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0408 12:10:21.538031  403745 command_runner.go:130] > monitor_cgroup = "pod"
	I0408 12:10:21.538041  403745 command_runner.go:130] > monitor_exec_cgroup = ""
	I0408 12:10:21.538050  403745 command_runner.go:130] > monitor_env = [
	I0408 12:10:21.538062  403745 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0408 12:10:21.538069  403745 command_runner.go:130] > ]
	I0408 12:10:21.538074  403745 command_runner.go:130] > privileged_without_host_devices = false
	I0408 12:10:21.538082  403745 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0408 12:10:21.538091  403745 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0408 12:10:21.538101  403745 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0408 12:10:21.538117  403745 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0408 12:10:21.538136  403745 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0408 12:10:21.538149  403745 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0408 12:10:21.538165  403745 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0408 12:10:21.538178  403745 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0408 12:10:21.538186  403745 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0408 12:10:21.538193  403745 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0408 12:10:21.538199  403745 command_runner.go:130] > # Example:
	I0408 12:10:21.538204  403745 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0408 12:10:21.538211  403745 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0408 12:10:21.538216  403745 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0408 12:10:21.538227  403745 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0408 12:10:21.538233  403745 command_runner.go:130] > # cpuset = 0
	I0408 12:10:21.538242  403745 command_runner.go:130] > # cpushares = "0-1"
	I0408 12:10:21.538251  403745 command_runner.go:130] > # Where:
	I0408 12:10:21.538262  403745 command_runner.go:130] > # The workload name is workload-type.
	I0408 12:10:21.538276  403745 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0408 12:10:21.538287  403745 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0408 12:10:21.538299  403745 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0408 12:10:21.538314  403745 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0408 12:10:21.538325  403745 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0408 12:10:21.538333  403745 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0408 12:10:21.538339  403745 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0408 12:10:21.538345  403745 command_runner.go:130] > # Default value is set to true
	I0408 12:10:21.538350  403745 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0408 12:10:21.538357  403745 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0408 12:10:21.538364  403745 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0408 12:10:21.538368  403745 command_runner.go:130] > # Default value is set to 'false'
	I0408 12:10:21.538375  403745 command_runner.go:130] > # disable_hostport_mapping = false
	I0408 12:10:21.538382  403745 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0408 12:10:21.538384  403745 command_runner.go:130] > #
	I0408 12:10:21.538390  403745 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0408 12:10:21.538395  403745 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0408 12:10:21.538401  403745 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0408 12:10:21.538406  403745 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0408 12:10:21.538414  403745 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0408 12:10:21.538418  403745 command_runner.go:130] > [crio.image]
	I0408 12:10:21.538423  403745 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0408 12:10:21.538428  403745 command_runner.go:130] > # default_transport = "docker://"
	I0408 12:10:21.538433  403745 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0408 12:10:21.538447  403745 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0408 12:10:21.538451  403745 command_runner.go:130] > # global_auth_file = ""
	I0408 12:10:21.538455  403745 command_runner.go:130] > # The image used to instantiate infra containers.
	I0408 12:10:21.538460  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.538464  403745 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0408 12:10:21.538470  403745 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0408 12:10:21.538478  403745 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0408 12:10:21.538485  403745 command_runner.go:130] > # This option supports live configuration reload.
	I0408 12:10:21.538496  403745 command_runner.go:130] > # pause_image_auth_file = ""
	I0408 12:10:21.538512  403745 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0408 12:10:21.538520  403745 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0408 12:10:21.538528  403745 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0408 12:10:21.538534  403745 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0408 12:10:21.538541  403745 command_runner.go:130] > # pause_command = "/pause"
	I0408 12:10:21.538547  403745 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0408 12:10:21.538554  403745 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0408 12:10:21.538561  403745 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0408 12:10:21.538571  403745 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0408 12:10:21.538576  403745 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0408 12:10:21.538584  403745 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0408 12:10:21.538591  403745 command_runner.go:130] > # pinned_images = [
	I0408 12:10:21.538594  403745 command_runner.go:130] > # ]
	I0408 12:10:21.538602  403745 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0408 12:10:21.538611  403745 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0408 12:10:21.538617  403745 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0408 12:10:21.538624  403745 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0408 12:10:21.538631  403745 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0408 12:10:21.538635  403745 command_runner.go:130] > # signature_policy = ""
	I0408 12:10:21.538642  403745 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0408 12:10:21.538651  403745 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0408 12:10:21.538659  403745 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0408 12:10:21.538670  403745 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0408 12:10:21.538677  403745 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0408 12:10:21.538684  403745 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0408 12:10:21.538690  403745 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0408 12:10:21.538698  403745 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0408 12:10:21.538705  403745 command_runner.go:130] > # changing them here.
	I0408 12:10:21.538709  403745 command_runner.go:130] > # insecure_registries = [
	I0408 12:10:21.538715  403745 command_runner.go:130] > # ]
	I0408 12:10:21.538721  403745 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0408 12:10:21.538728  403745 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0408 12:10:21.538732  403745 command_runner.go:130] > # image_volumes = "mkdir"
	I0408 12:10:21.538737  403745 command_runner.go:130] > # Temporary directory to use for storing big files
	I0408 12:10:21.538743  403745 command_runner.go:130] > # big_files_temporary_dir = ""
	I0408 12:10:21.538748  403745 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0408 12:10:21.538764  403745 command_runner.go:130] > # CNI plugins.
	I0408 12:10:21.538770  403745 command_runner.go:130] > [crio.network]
	I0408 12:10:21.538778  403745 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0408 12:10:21.538787  403745 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0408 12:10:21.538793  403745 command_runner.go:130] > # cni_default_network = ""
	I0408 12:10:21.538798  403745 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0408 12:10:21.538805  403745 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0408 12:10:21.538810  403745 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0408 12:10:21.538816  403745 command_runner.go:130] > # plugin_dirs = [
	I0408 12:10:21.538820  403745 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0408 12:10:21.538825  403745 command_runner.go:130] > # ]
	I0408 12:10:21.538834  403745 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0408 12:10:21.538840  403745 command_runner.go:130] > [crio.metrics]
	I0408 12:10:21.538844  403745 command_runner.go:130] > # Globally enable or disable metrics support.
	I0408 12:10:21.538850  403745 command_runner.go:130] > enable_metrics = true
	I0408 12:10:21.538855  403745 command_runner.go:130] > # Specify enabled metrics collectors.
	I0408 12:10:21.538862  403745 command_runner.go:130] > # Per default all metrics are enabled.
	I0408 12:10:21.538874  403745 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0408 12:10:21.538887  403745 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0408 12:10:21.538896  403745 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0408 12:10:21.538903  403745 command_runner.go:130] > # metrics_collectors = [
	I0408 12:10:21.538906  403745 command_runner.go:130] > # 	"operations",
	I0408 12:10:21.538912  403745 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0408 12:10:21.538917  403745 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0408 12:10:21.538923  403745 command_runner.go:130] > # 	"operations_errors",
	I0408 12:10:21.538928  403745 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0408 12:10:21.538934  403745 command_runner.go:130] > # 	"image_pulls_by_name",
	I0408 12:10:21.538938  403745 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0408 12:10:21.538947  403745 command_runner.go:130] > # 	"image_pulls_failures",
	I0408 12:10:21.538954  403745 command_runner.go:130] > # 	"image_pulls_successes",
	I0408 12:10:21.538958  403745 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0408 12:10:21.538964  403745 command_runner.go:130] > # 	"image_layer_reuse",
	I0408 12:10:21.538969  403745 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0408 12:10:21.538975  403745 command_runner.go:130] > # 	"containers_oom_total",
	I0408 12:10:21.538979  403745 command_runner.go:130] > # 	"containers_oom",
	I0408 12:10:21.538985  403745 command_runner.go:130] > # 	"processes_defunct",
	I0408 12:10:21.538995  403745 command_runner.go:130] > # 	"operations_total",
	I0408 12:10:21.539002  403745 command_runner.go:130] > # 	"operations_latency_seconds",
	I0408 12:10:21.539006  403745 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0408 12:10:21.539012  403745 command_runner.go:130] > # 	"operations_errors_total",
	I0408 12:10:21.539016  403745 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0408 12:10:21.539023  403745 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0408 12:10:21.539028  403745 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0408 12:10:21.539034  403745 command_runner.go:130] > # 	"image_pulls_success_total",
	I0408 12:10:21.539038  403745 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0408 12:10:21.539044  403745 command_runner.go:130] > # 	"containers_oom_count_total",
	I0408 12:10:21.539049  403745 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0408 12:10:21.539055  403745 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0408 12:10:21.539058  403745 command_runner.go:130] > # ]
	I0408 12:10:21.539063  403745 command_runner.go:130] > # The port on which the metrics server will listen.
	I0408 12:10:21.539069  403745 command_runner.go:130] > # metrics_port = 9090
	I0408 12:10:21.539074  403745 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0408 12:10:21.539080  403745 command_runner.go:130] > # metrics_socket = ""
	I0408 12:10:21.539085  403745 command_runner.go:130] > # The certificate for the secure metrics server.
	I0408 12:10:21.539092  403745 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0408 12:10:21.539100  403745 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0408 12:10:21.539107  403745 command_runner.go:130] > # certificate on any modification event.
	I0408 12:10:21.539110  403745 command_runner.go:130] > # metrics_cert = ""
	I0408 12:10:21.539115  403745 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0408 12:10:21.539122  403745 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0408 12:10:21.539126  403745 command_runner.go:130] > # metrics_key = ""
	I0408 12:10:21.539134  403745 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0408 12:10:21.539137  403745 command_runner.go:130] > [crio.tracing]
	I0408 12:10:21.539145  403745 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0408 12:10:21.539149  403745 command_runner.go:130] > # enable_tracing = false
	I0408 12:10:21.539157  403745 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0408 12:10:21.539164  403745 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0408 12:10:21.539171  403745 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0408 12:10:21.539178  403745 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0408 12:10:21.539182  403745 command_runner.go:130] > # CRI-O NRI configuration.
	I0408 12:10:21.539185  403745 command_runner.go:130] > [crio.nri]
	I0408 12:10:21.539190  403745 command_runner.go:130] > # Globally enable or disable NRI.
	I0408 12:10:21.539200  403745 command_runner.go:130] > # enable_nri = false
	I0408 12:10:21.539209  403745 command_runner.go:130] > # NRI socket to listen on.
	I0408 12:10:21.539214  403745 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0408 12:10:21.539218  403745 command_runner.go:130] > # NRI plugin directory to use.
	I0408 12:10:21.539225  403745 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0408 12:10:21.539230  403745 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0408 12:10:21.539236  403745 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0408 12:10:21.539242  403745 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0408 12:10:21.539249  403745 command_runner.go:130] > # nri_disable_connections = false
	I0408 12:10:21.539253  403745 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0408 12:10:21.539260  403745 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0408 12:10:21.539265  403745 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0408 12:10:21.539271  403745 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0408 12:10:21.539277  403745 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0408 12:10:21.539283  403745 command_runner.go:130] > [crio.stats]
	I0408 12:10:21.539288  403745 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0408 12:10:21.539293  403745 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0408 12:10:21.539297  403745 command_runner.go:130] > # stats_collection_period = 0
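For reference, the [crio.metrics] section above has enable_metrics = true with the default metrics_port of 9090, so CRI-O serves Prometheus metrics on the node. A minimal Go sketch that polls that endpoint and prints the crio_/container_runtime_ series names; it assumes the default port, localhost access, and no metrics_cert/metrics_key, matching the dump above:

	package main
	
	import (
		"bufio"
		"fmt"
		"net/http"
		"strings"
	)
	
	func main() {
		// Assumes enable_metrics = true and the default metrics_port (9090), as in the config dump.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
	
		// Print only series names carrying the crio_ / container_runtime_ prefixes
		// mentioned in the config comments above.
		scanner := bufio.NewScanner(resp.Body)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
				fmt.Println(strings.Fields(line)[0])
			}
		}
	}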
	I0408 12:10:21.539480  403745 cni.go:84] Creating CNI manager for ""
	I0408 12:10:21.539500  403745 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0408 12:10:21.539510  403745 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:10:21.539536  403745 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-830937 NodeName:multinode-830937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:10:21.539705  403745 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-830937"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
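The kubeadm config printed above is generated by minikube from the option set logged at kubeadm.go:181. As a rough illustration of that mechanism only (the params struct and template below are hypothetical, not minikube's actual ones), a text/template sketch that renders a comparable fragment:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// kubeadmParams is a hypothetical stand-in for the option set minikube logs at kubeadm.go:181.
	type kubeadmParams struct {
		NodeName   string
		NodeIP     string
		PodSubnet  string
		K8sVersion string
	}
	
	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`
	
	func main() {
		// Values taken from the log above.
		p := kubeadmParams{
			NodeName:   "multinode-830937",
			NodeIP:     "192.168.39.209",
			PodSubnet:  "10.244.0.0/16",
			K8sVersion: "v1.29.3",
		}
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}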
	
	I0408 12:10:21.539800  403745 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:10:21.551615  403745 command_runner.go:130] > kubeadm
	I0408 12:10:21.551640  403745 command_runner.go:130] > kubectl
	I0408 12:10:21.551645  403745 command_runner.go:130] > kubelet
	I0408 12:10:21.551670  403745 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:10:21.551747  403745 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:10:21.562567  403745 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0408 12:10:21.580580  403745 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:10:21.598456  403745 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0408 12:10:21.617157  403745 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0408 12:10:21.621378  403745 command_runner.go:130] > 192.168.39.209	control-plane.minikube.internal
	I0408 12:10:21.621517  403745 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:10:21.771888  403745 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:10:21.788375  403745 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937 for IP: 192.168.39.209
	I0408 12:10:21.788412  403745 certs.go:194] generating shared ca certs ...
	I0408 12:10:21.788440  403745 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:10:21.788649  403745 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:10:21.788703  403745 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:10:21.788718  403745 certs.go:256] generating profile certs ...
	I0408 12:10:21.788881  403745 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/client.key
	I0408 12:10:21.788953  403745 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.key.e1ccdead
	I0408 12:10:21.788991  403745 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.key
	I0408 12:10:21.789013  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 12:10:21.789030  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0408 12:10:21.789049  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 12:10:21.789065  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 12:10:21.789083  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 12:10:21.789100  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 12:10:21.789120  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 12:10:21.789137  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 12:10:21.789254  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:10:21.789288  403745 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:10:21.789298  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:10:21.789319  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:10:21.789346  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:10:21.789374  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:10:21.789425  403745 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:10:21.789499  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem -> /usr/share/ca-certificates/375817.pem
	I0408 12:10:21.789523  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> /usr/share/ca-certificates/3758172.pem
	I0408 12:10:21.789536  403745 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:21.790221  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:10:21.817450  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:10:21.844127  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:10:21.870601  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:10:21.896997  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:10:21.924343  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:10:21.951881  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:10:21.979743  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/multinode-830937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:10:22.006919  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:10:22.034161  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:10:22.060195  403745 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:10:22.086227  403745 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:10:22.104744  403745 ssh_runner.go:195] Run: openssl version
	I0408 12:10:22.112219  403745 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0408 12:10:22.112455  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:10:22.125306  403745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.130108  403745 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.130234  403745 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.130295  403745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:10:22.136316  403745 command_runner.go:130] > b5213941
	I0408 12:10:22.136399  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:10:22.147046  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:10:22.159512  403745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.164584  403745 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.164625  403745 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.164695  403745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:10:22.170757  403745 command_runner.go:130] > 51391683
	I0408 12:10:22.170908  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:10:22.182193  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:10:22.194932  403745 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.200266  403745 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.200349  403745 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.200428  403745 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:10:22.206662  403745 command_runner.go:130] > 3ec20f2e
	I0408 12:10:22.206780  403745 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:10:22.219241  403745 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:10:22.224078  403745 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:10:22.224108  403745 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0408 12:10:22.224116  403745 command_runner.go:130] > Device: 253,1	Inode: 5245446     Links: 1
	I0408 12:10:22.224125  403745 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0408 12:10:22.224137  403745 command_runner.go:130] > Access: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224144  403745 command_runner.go:130] > Modify: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224149  403745 command_runner.go:130] > Change: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224157  403745 command_runner.go:130] >  Birth: 2024-04-08 12:04:04.066089330 +0000
	I0408 12:10:22.224246  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:10:22.230517  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.230714  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:10:22.236838  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.236912  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:10:22.242789  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.243057  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:10:22.248881  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.249070  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:10:22.254912  403745 command_runner.go:130] > Certificate will not expire
	I0408 12:10:22.255004  403745 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:10:22.261127  403745 command_runner.go:130] > Certificate will not expire
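The six checks above shell out to openssl x509 -checkend 86400, meaning each certificate must remain valid for at least another 24 hours. The same check can be expressed directly with crypto/x509; this is a sketch using one of the certificate paths from the log (adjust the path for the other certs):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		// One of the certificates checked in the log above; any of the others works the same way.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent to `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
			os.Exit(1)
		}
		fmt.Println("Certificate will not expire")
	}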
	I0408 12:10:22.261250  403745 kubeadm.go:391] StartCluster: {Name:multinode-830937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-830937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.128 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.220 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:10:22.261405  403745 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:10:22.261486  403745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:10:22.299598  403745 command_runner.go:130] > e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76
	I0408 12:10:22.299649  403745 command_runner.go:130] > 5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a
	I0408 12:10:22.299659  403745 command_runner.go:130] > 1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6
	I0408 12:10:22.299667  403745 command_runner.go:130] > da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7
	I0408 12:10:22.299676  403745 command_runner.go:130] > 284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b
	I0408 12:10:22.299694  403745 command_runner.go:130] > 7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b
	I0408 12:10:22.299704  403745 command_runner.go:130] > 7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb
	I0408 12:10:22.299727  403745 command_runner.go:130] > 7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031
	I0408 12:10:22.301159  403745 cri.go:89] found id: "e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76"
	I0408 12:10:22.301183  403745 cri.go:89] found id: "5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a"
	I0408 12:10:22.301189  403745 cri.go:89] found id: "1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6"
	I0408 12:10:22.301194  403745 cri.go:89] found id: "da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7"
	I0408 12:10:22.301198  403745 cri.go:89] found id: "284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b"
	I0408 12:10:22.301207  403745 cri.go:89] found id: "7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b"
	I0408 12:10:22.301211  403745 cri.go:89] found id: "7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb"
	I0408 12:10:22.301214  403745 cri.go:89] found id: "7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031"
	I0408 12:10:22.301218  403745 cri.go:89] found id: ""
	I0408 12:10:22.301282  403745 ssh_runner.go:195] Run: sudo runc list -f json
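The container IDs listed above come from cri.go shelling out to crictl with a kube-system label filter. A standalone sketch of the same call via os/exec (it assumes crictl and a configured CRI endpoint are present on the node and that the command runs with root privileges):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same invocation as the ssh_runner call in the log above.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		// crictl prints one container ID per line when --quiet is set.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}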
	
	
	==> CRI-O <==
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.610515740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578452610486132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bf0fa15-60f5-4bc8-a701-15b68d768c72 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.615672885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc126cf5-aa21-45dc-8f7b-fe63bf42e052 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.615735409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc126cf5-aa21-45dc-8f7b-fe63bf42e052 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.616086334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc126cf5-aa21-45dc-8f7b-fe63bf42e052 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.660803635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9826374-335b-4b64-ac31-53a645fcbb3d name=/runtime.v1.RuntimeService/Version
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.660897968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9826374-335b-4b64-ac31-53a645fcbb3d name=/runtime.v1.RuntimeService/Version
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.662439635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4205b1b-f0e9-4a90-93ff-0eb5cebaa702 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.662919695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578452662894393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4205b1b-f0e9-4a90-93ff-0eb5cebaa702 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.663557098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=170d2f8f-2bad-4fd4-b3ce-b18e33e6ddbb name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.663665051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=170d2f8f-2bad-4fd4-b3ce-b18e33e6ddbb name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.664020426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=170d2f8f-2bad-4fd4-b3ce-b18e33e6ddbb name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.710959129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5bdd3950-fc94-411a-b8f8-200dff1848d4 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.711075624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5bdd3950-fc94-411a-b8f8-200dff1848d4 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.712976809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a881d53-c6e1-45f9-b87d-2ff195bcb1ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.714399009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578452714371973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a881d53-c6e1-45f9-b87d-2ff195bcb1ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.719352159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8186895b-7f31-44ee-ba3e-9067e2a44653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.719443864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8186895b-7f31-44ee-ba3e-9067e2a44653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.719935485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8186895b-7f31-44ee-ba3e-9067e2a44653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.767494671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e74383f9-cc89-41dd-8ef0-6fe754d99739 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.767630164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e74383f9-cc89-41dd-8ef0-6fe754d99739 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.769141591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d5cd746-4d0e-4594-b3af-5ccdc3bef992 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.769740513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712578452769708271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d5cd746-4d0e-4594-b3af-5ccdc3bef992 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.770473881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cc1a030-5615-420d-85c3-2070a4c91dc3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.770533665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cc1a030-5615-420d-85c3-2070a4c91dc3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:14:12 multinode-830937 crio[2834]: time="2024-04-08 12:14:12.771076306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba346d53555d55e94161e197a28f9d323b18afae027ef01ac96711f822fa1a8c,PodSandboxId:67bea1ad1e2cecd06b6b94089d28efd1d0369b01bf234343a55776911a2a092d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1712578263112487534,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3,PodSandboxId:cb6242edc417aff85a9fb79f999da7a17e87e40f1b46388dcdc2aac3c7559dd4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1712578229655325640,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256,PodSandboxId:98833232fc637f0840ce9b5b22ca5a58c91a6e691ebf5b3b7ffc5aeffc03d982,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712578229550629505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83f9e36fb14983c3b277c377038d02ab838dbc72c1157d2e7ca6e094f91ec80c,PodSandboxId:a624408700c2bc406bb43cedb68d2ccea0f0dc6468ec00290a5d567487ab78e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712578229454319577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},A
nnotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783,PodSandboxId:2767413e7d2991e908f6a64db4c823dd3f8972dd4fb8ba196bb66bd7fe92fac0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712578229420885577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a1b-dd394759e88e,},Annotations:map[string]string{io.k
ubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f,PodSandboxId:dc93300983e1da070dd99196e98036ce96308458d8283d35f2e22a56dfe1ffa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712578224654214978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernetes.container.hash: d1246064,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7,PodSandboxId:175d44acb00d5a20a95f91b27e3385be0f547566b5204988dc3adbdd280c5766,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712578224624135108,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.container.hash: 2d255
7ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db,PodSandboxId:c2410c8bdb79b8a8ca16b5aa51c7b3c155f011be713493274c1a9ffb44cae134,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712578224623108790,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.hash: 4f957258,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8,PodSandboxId:58221f688267581c6b1d329c806bb3484690fa70d25280107d97fb3da0b8d2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712578224532272269,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cf7f781e2db2fd91f37195c867e456dbdfe6653bb762c385b9584de1d90207,PodSandboxId:b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1712577916208457834,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-jn6pk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 384a9c78-7509-45b9-9491-3cff7c3ee650,},Annotations:map[string]string{io.kubernetes.container.hash: bbbde5e3,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bb59aff7adac88bb589a57b42a9df9e461034e28415099def5ac03cd93a989a,PodSandboxId:4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712577869216767226,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a66019d1-fd63-4dd8-8954-c279352fbd0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbdd9b5d,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76,PodSandboxId:71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712577869216893156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5fk5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7258d8-d40d-4304-88ff-dfd2acc388e2,},Annotations:map[string]string{io.kubernetes.container.hash: 36d2b13d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6,PodSandboxId:5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1712577867736013137,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pshn8,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: fe500d50-29e0-48c7-8a7d-c1d7885d7293,},Annotations:map[string]string{io.kubernetes.container.hash: d78489c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7,PodSandboxId:e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1712577867533782283,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm6vx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feeec3e8-e596-4675-9a
1b-dd394759e88e,},Annotations:map[string]string{io.kubernetes.container.hash: 6ec5f91b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b,PodSandboxId:e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1712577848107907417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5b79de58a1de000fdb766c8c2ded58a
,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b,PodSandboxId:5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712577848069701306,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b801b753040f40fcf4d08dd3bf64142,},Annotations:map[string]string{io.kubernete
s.container.hash: d1246064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb,PodSandboxId:10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1712577848067509935,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98343e58f0d1b18f1fef2476b3eb21d6,},Annotations:map[string]string{io.kubernetes.container.has
h: 4f957258,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031,PodSandboxId:f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1712577848037136462,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-830937,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1c18b37da34361164ff4a42a164cf28,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cc1a030-5615-420d-85c3-2070a4c91dc3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ba346d53555d5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   67bea1ad1e2ce       busybox-7fdf7869d9-jn6pk
	ce62d426a1abb       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   cb6242edc417a       kindnet-pshn8
	6b3f97a10dbec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   98833232fc637       coredns-76f75df574-5fk5c
	83f9e36fb1498       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   a624408700c2b       storage-provisioner
	82bbbff4ed64b       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   2767413e7d299       kube-proxy-qm6vx
	992f433a96797       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   dc93300983e1d       etcd-multinode-830937
	9c67d2db5cd71       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   175d44acb00d5       kube-controller-manager-multinode-830937
	5ac245d28d5e6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   c2410c8bdb79b       kube-apiserver-multinode-830937
	beccf095cd84c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   58221f6882675       kube-scheduler-multinode-830937
	c0cf7f781e2db       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   b093d1301485c       busybox-7fdf7869d9-jn6pk
	e44b4f6b6a25e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   71990dafa93e1       coredns-76f75df574-5fk5c
	5bb59aff7adac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   4d43c58410aee       storage-provisioner
	1e04ca573f33a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   5a343ad7d660c       kindnet-pshn8
	da9349d66fe24       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      9 minutes ago       Exited              kube-proxy                0                   e48f38f25b4aa       kube-proxy-qm6vx
	284273d5afb07       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      10 minutes ago      Exited              kube-scheduler            0                   e033ff197814a       kube-scheduler-multinode-830937
	7ed59a2a6bedc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   5bd6b29159a09       etcd-multinode-830937
	7e303f2a50cf0       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      10 minutes ago      Exited              kube-apiserver            0                   10c0ab27d62a2       kube-apiserver-multinode-830937
	7e5f832d63815       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      10 minutes ago      Exited              kube-controller-manager   0                   f9f7d9b0fcca8       kube-controller-manager-multinode-830937
	
	
	==> coredns [6b3f97a10dbecd4eb8f34f0fca7d5c7aff4d16670debe8c2de6a4dd3277e7256] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50421 - 389 "HINFO IN 1154849171857597706.7063981779794316596. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007401745s
	
	
	==> coredns [e44b4f6b6a25ecca221a3861cb65bdf93b5a9c1d187a6f2146b638706d650a76] <==
	[INFO] 10.244.0.3:36577 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001786511s
	[INFO] 10.244.0.3:40240 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052431s
	[INFO] 10.244.0.3:35015 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045612s
	[INFO] 10.244.0.3:35469 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001334288s
	[INFO] 10.244.0.3:56058 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081303s
	[INFO] 10.244.0.3:47690 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049309s
	[INFO] 10.244.0.3:41173 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049722s
	[INFO] 10.244.1.2:57122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000306719s
	[INFO] 10.244.1.2:52769 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162088s
	[INFO] 10.244.1.2:37533 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093994s
	[INFO] 10.244.1.2:39965 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075724s
	[INFO] 10.244.0.3:54441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069189s
	[INFO] 10.244.0.3:57561 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000051756s
	[INFO] 10.244.0.3:42008 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047009s
	[INFO] 10.244.0.3:56901 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045897s
	[INFO] 10.244.1.2:56321 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140528s
	[INFO] 10.244.1.2:45879 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014176s
	[INFO] 10.244.1.2:35317 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134061s
	[INFO] 10.244.1.2:39462 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121502s
	[INFO] 10.244.0.3:60830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080915s
	[INFO] 10.244.0.3:37696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000037257s
	[INFO] 10.244.0.3:42774 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116028s
	[INFO] 10.244.0.3:57023 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000047469s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-830937
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-830937
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=multinode-830937
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_04_14_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:04:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-830937
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 12:14:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:10:28 +0000   Mon, 08 Apr 2024 12:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    multinode-830937
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b40595c8482648e0ac686434f4f4a9a5
	  System UUID:                b40595c8-4826-48e0-ac68-6434f4f4a9a5
	  Boot ID:                    367f8949-d58b-4d28-9f83-ad221b18208d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jn6pk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 coredns-76f75df574-5fk5c                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m46s
	  kube-system                 etcd-multinode-830937                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m59s
	  kube-system                 kindnet-pshn8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m47s
	  kube-system                 kube-apiserver-multinode-830937             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-controller-manager-multinode-830937    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-qm6vx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-scheduler-multinode-830937             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m45s                  kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m59s                  kubelet          Node multinode-830937 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m59s                  kubelet          Node multinode-830937 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s                  kubelet          Node multinode-830937 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m59s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m48s                  node-controller  Node multinode-830937 event: Registered Node multinode-830937 in Controller
	  Normal  NodeReady                9m45s                  kubelet          Node multinode-830937 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-830937 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-830937 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-830937 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-830937 event: Registered Node multinode-830937 in Controller
	
	
	Name:               multinode-830937-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-830937-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=multinode-830937
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_08T12_11_09_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:11:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-830937-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 12:11:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 08 Apr 2024 12:11:38 +0000   Mon, 08 Apr 2024 12:12:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    multinode-830937-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f602a83f5d244d8a54a893c287d5654
	  System UUID:                6f602a83-f5d2-44d8-a54a-893c287d5654
	  Boot ID:                    39df4e58-c417-4dec-88eb-ccc5c5d887a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-2pf6r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-9pdws               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m12s
	  kube-system                 kube-proxy-rhzzl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  Starting                 9m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m12s (x3 over 9m13s)  kubelet          Node multinode-830937-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s (x3 over 9m13s)  kubelet          Node multinode-830937-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s (x3 over 9m13s)  kubelet          Node multinode-830937-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m3s                   kubelet          Node multinode-830937-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)    kubelet          Node multinode-830937-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)    kubelet          Node multinode-830937-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)    kubelet          Node multinode-830937-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-830937-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-830937-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055861] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052867] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.175089] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.145009] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.282261] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[Apr 8 12:04] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.062538] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.764834] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.632545] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.700045] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.077752] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.099639] systemd-fstab-generator[1467]: Ignoring "noauto" option for root device
	[  +0.129495] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 8 12:05] kauditd_printk_skb: 82 callbacks suppressed
	[Apr 8 12:10] systemd-fstab-generator[2752]: Ignoring "noauto" option for root device
	[  +0.144589] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.181270] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +0.141141] systemd-fstab-generator[2791]: Ignoring "noauto" option for root device
	[  +0.303184] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.768801] systemd-fstab-generator[2917]: Ignoring "noauto" option for root device
	[  +1.936846] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +5.702177] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.020455] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.799258] systemd-fstab-generator[3864]: Ignoring "noauto" option for root device
	[Apr 8 12:11] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7ed59a2a6bedca5f4b2949db2a83a436ab3c2f14170b5a48dcc16bd1fbe1b17b] <==
	{"level":"info","ts":"2024-04-08T12:04:09.370779Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:04:09.370818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:04:09.372529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T12:04:09.376091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:04:09.376202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:04:09.381679Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:04:09.390696Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-04-08T12:05:01.025263Z","caller":"traceutil/trace.go:171","msg":"trace[406954701] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"245.840194ms","start":"2024-04-08T12:05:00.779389Z","end":"2024-04-08T12:05:01.025229Z","steps":["trace[406954701] 'process raft request'  (duration: 239.933846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:05:50.179674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.799499ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6276767450623414734 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-830937-m03.17c44c9262309686\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-830937-m03.17c44c9262309686\" value_size:642 lease:6276767450623414531 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-08T12:05:50.179978Z","caller":"traceutil/trace.go:171","msg":"trace[1481561105] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"194.282408ms","start":"2024-04-08T12:05:49.985664Z","end":"2024-04-08T12:05:50.179947Z","steps":["trace[1481561105] 'process raft request'  (duration: 194.221264ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:05:50.180083Z","caller":"traceutil/trace.go:171","msg":"trace[1307218404] linearizableReadLoop","detail":"{readStateIndex:592; appliedIndex:591; }","duration":"242.423286ms","start":"2024-04-08T12:05:49.937645Z","end":"2024-04-08T12:05:50.180068Z","steps":["trace[1307218404] 'read index received'  (duration: 79.516699ms)","trace[1307218404] 'applied index is now lower than readState.Index'  (duration: 162.905014ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T12:05:50.180196Z","caller":"traceutil/trace.go:171","msg":"trace[618306446] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"245.257206ms","start":"2024-04-08T12:05:49.934931Z","end":"2024-04-08T12:05:50.180188Z","steps":["trace[618306446] 'process raft request'  (duration: 82.2215ms)","trace[618306446] 'compare'  (duration: 161.691585ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T12:05:50.180546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"242.895344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-08T12:05:50.182298Z","caller":"traceutil/trace.go:171","msg":"trace[1089261321] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:559; }","duration":"244.710557ms","start":"2024-04-08T12:05:49.937567Z","end":"2024-04-08T12:05:50.182277Z","steps":["trace[1089261321] 'agreement among raft nodes before linearized reading'  (duration: 242.957103ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:05:54.161383Z","caller":"traceutil/trace.go:171","msg":"trace[1465123816] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"188.898528ms","start":"2024-04-08T12:05:53.972415Z","end":"2024-04-08T12:05:54.161313Z","steps":["trace[1465123816] 'process raft request'  (duration: 188.733343ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:08:48.794431Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-08T12:08:48.794564Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-830937","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-04-08T12:08:48.794705Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T12:08:48.794812Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T12:08:48.883902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-08T12:08:48.884036Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-08T12:08:48.884093Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-04-08T12:08:48.887562Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:08:48.887803Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:08:48.887816Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-830937","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> etcd [992f433a967979a5451e3846e2712e74c86f193d98474b2a9e7bb9eaa59ae77f] <==
	{"level":"info","ts":"2024-04-08T12:10:25.417093Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:10:25.417105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:10:25.417353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-04-08T12:10:25.417438Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-04-08T12:10:25.417555Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:10:25.417647Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:10:25.435108Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T12:10:25.436123Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T12:10:25.436366Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T12:10:25.440715Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:10:25.440759Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-04-08T12:10:27.24721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-08T12:10:27.247253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:10:27.24729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-04-08T12:10:27.247303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.247309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.247317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.247327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-04-08T12:10:27.256289Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:multinode-830937 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:10:27.25629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:10:27.256466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:10:27.256482Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:10:27.256899Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:10:27.25842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-04-08T12:10:27.258679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:14:13 up 10 min,  0 users,  load average: 0.49, 0.28, 0.14
	Linux multinode-830937 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1e04ca573f33a4a94c88e527b6e55dea3d936196ca88eeca05c500ca5157ebb6] <==
	I0408 12:08:08.730272       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:18.743541       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:18.743708       1 main.go:227] handling current node
	I0408 12:08:18.743735       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:18.743755       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:18.744020       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:18.744103       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:28.757556       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:28.757945       1 main.go:227] handling current node
	I0408 12:08:28.758036       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:28.758065       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:28.758193       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:28.758214       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:38.764043       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:38.764090       1 main.go:227] handling current node
	I0408 12:08:38.764101       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:38.764107       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:38.764210       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:38.764215       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	I0408 12:08:48.779223       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:08:48.779305       1 main.go:227] handling current node
	I0408 12:08:48.779316       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:08:48.779323       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:08:48.779654       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0408 12:08:48.779684       1 main.go:250] Node multinode-830937-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ce62d426a1abbedde12209c16da349c42d921c7c125d4409c89d25be0ddcedf3] <==
	I0408 12:13:10.737854       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:13:20.755324       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:13:20.755432       1 main.go:227] handling current node
	I0408 12:13:20.755467       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:13:20.755497       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:13:30.794847       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:13:30.794976       1 main.go:227] handling current node
	I0408 12:13:30.794999       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:13:30.795017       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:13:40.810504       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:13:40.810841       1 main.go:227] handling current node
	I0408 12:13:40.810926       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:13:40.810977       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:13:50.821744       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:13:50.821851       1 main.go:227] handling current node
	I0408 12:13:50.821875       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:13:50.821892       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:14:00.837069       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:14:00.837161       1 main.go:227] handling current node
	I0408 12:14:00.837190       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:14:00.837209       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	I0408 12:14:10.842367       1 main.go:223] Handling node with IPs: map[192.168.39.209:{}]
	I0408 12:14:10.842418       1 main.go:227] handling current node
	I0408 12:14:10.842429       1 main.go:223] Handling node with IPs: map[192.168.39.128:{}]
	I0408 12:14:10.842435       1 main.go:250] Node multinode-830937-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5ac245d28d5e672a9f9b6975c10fbb115404bb72b063855e846e8bdca6f0c2db] <==
	I0408 12:10:28.621103       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0408 12:10:28.621113       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0408 12:10:28.621148       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0408 12:10:28.673125       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 12:10:28.679835       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0408 12:10:28.680396       1 shared_informer.go:318] Caches are synced for configmaps
	I0408 12:10:28.680881       1 aggregator.go:165] initial CRD sync complete...
	I0408 12:10:28.680917       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 12:10:28.680940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 12:10:28.680963       1 cache.go:39] Caches are synced for autoregister controller
	I0408 12:10:28.698340       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0408 12:10:28.698412       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0408 12:10:28.698424       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0408 12:10:28.740199       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 12:10:28.760900       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0408 12:10:28.769161       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0408 12:10:28.787196       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0408 12:10:29.598812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 12:10:30.926219       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0408 12:10:31.054361       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 12:10:31.071954       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 12:10:31.145477       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 12:10:31.153103       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 12:10:41.007992       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 12:10:41.303221       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [7e303f2a50cf03dcc4df54bcf12644de196e65745fadfa901b8795f158ffdbbb] <==
	I0408 12:04:10.998490       1 cache.go:39] Caches are synced for autoregister controller
	I0408 12:04:11.781219       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0408 12:04:11.790190       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0408 12:04:11.790262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 12:04:12.625360       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 12:04:12.676522       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0408 12:04:12.798093       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0408 12:04:12.805217       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.209]
	I0408 12:04:12.806304       1 controller.go:624] quota admission added evaluator for: endpoints
	I0408 12:04:12.813685       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0408 12:04:12.839458       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0408 12:04:14.255217       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0408 12:04:14.270653       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 12:04:14.291968       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0408 12:04:26.299550       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0408 12:04:26.406012       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0408 12:08:48.792243       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0408 12:08:48.805999       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0408 12:08:48.807559       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.810957       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811040       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811139       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811160       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.811320       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:08:48.817883       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7e5f832d63815a77ca52174cc3d77105af496e280e7760300460b1758070c031] <==
	I0408 12:05:16.936862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.840326ms"
	I0408 12:05:16.936966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.353µs"
	I0408 12:05:50.181677       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m03\" does not exist"
	I0408 12:05:50.183953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:05:50.198464       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m03" podCIDRs=["10.244.2.0/24"]
	I0408 12:05:50.233120       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-25r2l"
	I0408 12:05:50.235468       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cd659"
	I0408 12:05:50.891335       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-830937-m03"
	I0408 12:05:50.891461       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-830937-m03 event: Registered Node multinode-830937-m03 in Controller"
	I0408 12:06:00.684986       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:06:31.787335       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:06:32.912254       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m03\" does not exist"
	I0408 12:06:32.913953       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:06:32.940724       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m03" podCIDRs=["10.244.3.0/24"]
	I0408 12:06:42.018840       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:07:25.948386       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:07:25.950299       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-830937-m03 status is now: NodeNotReady"
	I0408 12:07:25.963277       1 event.go:376] "Event occurred" object="multinode-830937-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-830937-m02 status is now: NodeNotReady"
	I0408 12:07:25.977126       1 event.go:376] "Event occurred" object="kube-system/kindnet-cd659" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:25.988673       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-rhzzl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:25.993398       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-25r2l" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:26.008109       1 event.go:376] "Event occurred" object="kube-system/kindnet-9pdws" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:26.020124       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-522p8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:07:26.028847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="8.424434ms"
	I0408 12:07:26.029715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="53.224µs"
	
	
	==> kube-controller-manager [9c67d2db5cd71be3260bc3284c6bf7b61d6236639ee0d5d15576ad463d0be9e7] <==
	I0408 12:11:16.085832       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:16.108278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="84.385µs"
	I0408 12:11:16.126157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="80.644µs"
	I0408 12:11:19.084066       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="10.155919ms"
	I0408 12:11:19.085143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="136.688µs"
	I0408 12:11:21.014763       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2pf6r" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-2pf6r"
	I0408 12:11:35.170554       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:36.018889       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-830937-m03 event: Removing Node multinode-830937-m03 from Controller"
	I0408 12:11:36.409667       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:36.410207       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-830937-m03\" does not exist"
	I0408 12:11:36.435050       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-830937-m03" podCIDRs=["10.244.2.0/24"]
	I0408 12:11:41.019675       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-830937-m03 event: Registered Node multinode-830937-m03 in Controller"
	I0408 12:11:45.352936       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:51.229004       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-830937-m02"
	I0408 12:11:56.034735       1 event.go:376] "Event occurred" object="multinode-830937-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-830937-m03 event: Removing Node multinode-830937-m03 from Controller"
	I0408 12:12:31.052925       1 event.go:376] "Event occurred" object="multinode-830937-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-830937-m02 status is now: NodeNotReady"
	I0408 12:12:31.067819       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-rhzzl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:12:31.081201       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-2pf6r" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:12:31.098299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.593232ms"
	I0408 12:12:31.098508       1 event.go:376] "Event occurred" object="kube-system/kindnet-9pdws" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0408 12:12:31.098792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="40.99µs"
	I0408 12:12:40.986491       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-cd659"
	I0408 12:12:41.015342       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-cd659"
	I0408 12:12:41.015392       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-25r2l"
	I0408 12:12:41.049166       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-25r2l"
	
	
	==> kube-proxy [82bbbff4ed64b189e32ea73a5496cc005d617f4fe75cf1efb8d70e46e2a77783] <==
	I0408 12:10:29.757144       1 server_others.go:72] "Using iptables proxy"
	I0408 12:10:29.797353       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0408 12:10:29.924888       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:10:29.924939       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:10:29.924959       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:10:29.928862       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:10:29.929230       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:10:29.929431       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:10:29.931564       1 config.go:188] "Starting service config controller"
	I0408 12:10:29.931709       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:10:29.931761       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:10:29.931786       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:10:29.931978       1 config.go:315] "Starting node config controller"
	I0408 12:10:29.932016       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:10:30.032696       1 shared_informer.go:318] Caches are synced for node config
	I0408 12:10:30.032723       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:10:30.032752       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [da9349d66fe240a90072a00d1414a8a156d8cfbe180285cb88b091b321a6b5f7] <==
	I0408 12:04:27.657061       1 server_others.go:72] "Using iptables proxy"
	I0408 12:04:27.670018       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0408 12:04:27.717137       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:04:27.717180       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:04:27.717255       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:04:27.720403       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:04:27.720860       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:04:27.720881       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:04:27.721838       1 config.go:188] "Starting service config controller"
	I0408 12:04:27.721885       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:04:27.721910       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:04:27.721914       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:04:27.722451       1 config.go:315] "Starting node config controller"
	I0408 12:04:27.722458       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:04:27.822986       1 shared_informer.go:318] Caches are synced for node config
	I0408 12:04:27.823041       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:04:27.823087       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [284273d5afb077cd18572aa493af5542e48230668f292adcd4f8fffdc444c42b] <==
	W0408 12:04:10.959517       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 12:04:10.959735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 12:04:11.784574       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 12:04:11.784677       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:04:11.891130       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 12:04:11.891268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 12:04:11.953668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 12:04:11.953735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 12:04:11.954265       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 12:04:11.954337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 12:04:12.093001       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 12:04:12.093031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 12:04:12.120996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 12:04:12.121043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 12:04:12.149100       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 12:04:12.149173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 12:04:12.202833       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 12:04:12.202894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 12:04:12.264464       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 12:04:12.264496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 12:04:12.270180       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 12:04:12.270266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 12:04:12.372126       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 12:04:12.372270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0408 12:04:14.934437       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [beccf095cd84c5d05e54524d2bb05f02e6d2840e213ecc5a7548b85238fe66d8] <==
	I0408 12:10:25.952262       1 serving.go:380] Generated self-signed cert in-memory
	W0408 12:10:28.633340       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 12:10:28.635680       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:10:28.635809       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 12:10:28.635840       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 12:10:28.672666       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0408 12:10:28.672899       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:10:28.680783       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 12:10:28.680830       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 12:10:28.683522       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0408 12:10:28.683641       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0408 12:10:28.781240       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 12:12:23 multinode-830937 kubelet[3049]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 12:12:23 multinode-830937 kubelet[3049]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.965012    3049 manager.go:1116] Failed to create existing container: /kubepods/podfe500d50-29e0-48c7-8a7d-c1d7885d7293/crio-5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003: Error finding container 5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003: Status 404 returned error can't find the container with id 5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.965563    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6b801b753040f40fcf4d08dd3bf64142/crio-5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a: Error finding container 5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a: Status 404 returned error can't find the container with id 5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.965933    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda5b79de58a1de000fdb766c8c2ded58a/crio-e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab: Error finding container e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab: Status 404 returned error can't find the container with id e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.966197    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podfeeec3e8-e596-4675-9a1b-dd394759e88e/crio-e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364: Error finding container e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364: Status 404 returned error can't find the container with id e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.966427    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod98343e58f0d1b18f1fef2476b3eb21d6/crio-10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965: Error finding container 10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965: Status 404 returned error can't find the container with id 10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.966704    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6a7258d8-d40d-4304-88ff-dfd2acc388e2/crio-71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55: Error finding container 71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55: Status 404 returned error can't find the container with id 71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.966934    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda66019d1-fd63-4dd8-8954-c279352fbd0b/crio-4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c: Error finding container 4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c: Status 404 returned error can't find the container with id 4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.967170    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb1c18b37da34361164ff4a42a164cf28/crio-f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578: Error finding container f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578: Status 404 returned error can't find the container with id f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578
	Apr 08 12:12:23 multinode-830937 kubelet[3049]: E0408 12:12:23.967367    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod384a9c78-7509-45b9-9491-3cff7c3ee650/crio-b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2: Error finding container b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2: Status 404 returned error can't find the container with id b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.918266    3049 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 12:13:23 multinode-830937 kubelet[3049]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 12:13:23 multinode-830937 kubelet[3049]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 12:13:23 multinode-830937 kubelet[3049]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 12:13:23 multinode-830937 kubelet[3049]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.965028    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/podb1c18b37da34361164ff4a42a164cf28/crio-f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578: Error finding container f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578: Status 404 returned error can't find the container with id f9f7d9b0fcca80abacdd2a6739ba9a33a05700f4a81ade37ec34c240c6130578
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.965463    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda5b79de58a1de000fdb766c8c2ded58a/crio-e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab: Error finding container e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab: Status 404 returned error can't find the container with id e033ff197814a35b7f46cf90c677cbcd19f26b4baff565b23a9b30c84fad10ab
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.965801    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podfeeec3e8-e596-4675-9a1b-dd394759e88e/crio-e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364: Error finding container e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364: Status 404 returned error can't find the container with id e48f38f25b4aaf1c5f242231f4c9dfdf4522f121530d9581959741c700fbf364
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.966037    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda66019d1-fd63-4dd8-8954-c279352fbd0b/crio-4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c: Error finding container 4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c: Status 404 returned error can't find the container with id 4d43c58410aeeb8d92d99471959a28cdf15edbbe7b1efa4861449a402e60f72c
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.966310    3049 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod384a9c78-7509-45b9-9491-3cff7c3ee650/crio-b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2: Error finding container b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2: Status 404 returned error can't find the container with id b093d1301485c81b8eb8e5711d23e7a82c36abd4a6f7262e062635a5902b0bd2
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.966549    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod98343e58f0d1b18f1fef2476b3eb21d6/crio-10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965: Error finding container 10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965: Status 404 returned error can't find the container with id 10c0ab27d62a2d363822ac246db7a8eaa222a1c4cdd1f86881a07b254ddda965
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.966782    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6b801b753040f40fcf4d08dd3bf64142/crio-5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a: Error finding container 5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a: Status 404 returned error can't find the container with id 5bd6b29159a099c3ff6a19ef17c06f493080cf423cf9ee08c456e72cc422f07a
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.967065    3049 manager.go:1116] Failed to create existing container: /kubepods/podfe500d50-29e0-48c7-8a7d-c1d7885d7293/crio-5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003: Error finding container 5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003: Status 404 returned error can't find the container with id 5a343ad7d660c1a61dd9a5b7fc1ecb58a1205bf3c00614eb72dd1a8616805003
	Apr 08 12:13:23 multinode-830937 kubelet[3049]: E0408 12:13:23.967379    3049 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod6a7258d8-d40d-4304-88ff-dfd2acc388e2/crio-71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55: Error finding container 71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55: Status 404 returned error can't find the container with id 71990dafa93e1a8a9a16c1437030255fcb11bfef84067af04be16033cfe62b55
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:14:12.315768  405599 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18588-368424/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-830937 -n multinode-830937
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-830937 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.64s)
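Note on the stderr block above: "bufio.Scanner: token too long" is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, so the last-start logs could not be echoed into the post-mortem. As an illustration only (the file path is shortened and the 1 MiB cap is an arbitrary choice, not what minikube itself does), a reader of that file can raise the limit with Scanner.Buffer:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Shortened stand-in for the .minikube/logs/lastStart.txt path from the report.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is 64 KiB; allow single lines up to 1 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call above, an over-long line surfaces here
		// as "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, err)
	}
}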

                                                
                                    
x
+
TestPreload (316.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-834289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0408 12:18:06.833530  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:18:44.543936  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-834289 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m53.627685704s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-834289 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-834289 image pull gcr.io/k8s-minikube/busybox: (2.83205947s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-834289
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-834289: exit status 82 (2m0.505780525s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-834289"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-834289 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-08 12:22:51.771964267 +0000 UTC m=+3749.558646506
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-834289 -n test-preload-834289
E0408 12:23:06.833013  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-834289 -n test-preload-834289: exit status 3 (18.521390508s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:23:10.288085  409098 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host
	E0408 12:23:10.288108  409098 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.82:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-834289" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-834289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-834289
--- FAIL: TestPreload (316.43s)
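For reference, a minimal Go sketch of the command sequence this test drove, reconstructed from the Run lines above (binary path, profile name, flags, and image are copied from the report; the helper itself is illustrative and is not the actual preload_test.go code):

package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary once and aborts on the first failing step,
// mirroring the order shown in the report; the real run failed at "stop"
// with exit status 82 (GUEST_STOP_TIMEOUT).
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	log.Printf("minikube %v\n%s", args, out)
	if err != nil {
		log.Fatalf("step failed: %v", err)
	}
}

func main() {
	run("start", "-p", "test-preload-834289", "--memory=2200", "--alsologtostderr",
		"--wait=true", "--preload=false", "--driver=kvm2",
		"--container-runtime=crio", "--kubernetes-version=v1.24.4")
	run("-p", "test-preload-834289", "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", "test-preload-834289")
	run("delete", "-p", "test-preload-834289") // cleanup, as helpers_test.go:178 does
}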

                                                
                                    
x
+
TestKubernetesUpgrade (420.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m59.613957369s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-144569] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-144569" primary control-plane node in "kubernetes-upgrade-144569" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:25:07.498053  410199 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:25:07.498370  410199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:25:07.498382  410199 out.go:304] Setting ErrFile to fd 2...
	I0408 12:25:07.498390  410199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:25:07.498987  410199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:25:07.500042  410199 out.go:298] Setting JSON to false
	I0408 12:25:07.501403  410199 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7651,"bootTime":1712571457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:25:07.501499  410199 start.go:139] virtualization: kvm guest
	I0408 12:25:07.503367  410199 out.go:177] * [kubernetes-upgrade-144569] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:25:07.506519  410199 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:25:07.505147  410199 notify.go:220] Checking for updates...
	I0408 12:25:07.509254  410199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:25:07.512342  410199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:25:07.515182  410199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:25:07.516683  410199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:25:07.520231  410199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:25:07.522064  410199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:25:07.561090  410199 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 12:25:07.562561  410199 start.go:297] selected driver: kvm2
	I0408 12:25:07.562586  410199 start.go:901] validating driver "kvm2" against <nil>
	I0408 12:25:07.562601  410199 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:25:07.563655  410199 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:25:07.579971  410199 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:25:07.596492  410199 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:25:07.596548  410199 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 12:25:07.596782  410199 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 12:25:07.596856  410199 cni.go:84] Creating CNI manager for ""
	I0408 12:25:07.596874  410199 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:25:07.596884  410199 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 12:25:07.596959  410199 start.go:340] cluster config:
	{Name:kubernetes-upgrade-144569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-144569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:25:07.597074  410199 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:25:07.598915  410199 out.go:177] * Starting "kubernetes-upgrade-144569" primary control-plane node in "kubernetes-upgrade-144569" cluster
	I0408 12:25:07.600260  410199 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:25:07.600311  410199 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:25:07.600326  410199 cache.go:56] Caching tarball of preloaded images
	I0408 12:25:07.600436  410199 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:25:07.600447  410199 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:25:07.600818  410199 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/config.json ...
	I0408 12:25:07.600850  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/config.json: {Name:mk73e50c043d3c6264b828655089c88c28eff35f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:25:07.601042  410199 start.go:360] acquireMachinesLock for kubernetes-upgrade-144569: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:25:33.873111  410199 start.go:364] duration metric: took 26.271982073s to acquireMachinesLock for "kubernetes-upgrade-144569"
	I0408 12:25:33.873198  410199 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-144569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-144569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:25:33.873381  410199 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 12:25:33.876875  410199 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 12:25:33.877129  410199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:25:33.877179  410199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:25:33.894096  410199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I0408 12:25:33.894505  410199 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:25:33.895075  410199 main.go:141] libmachine: Using API Version  1
	I0408 12:25:33.895102  410199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:25:33.895447  410199 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:25:33.895697  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetMachineName
	I0408 12:25:33.895851  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:33.895993  410199 start.go:159] libmachine.API.Create for "kubernetes-upgrade-144569" (driver="kvm2")
	I0408 12:25:33.896024  410199 client.go:168] LocalClient.Create starting
	I0408 12:25:33.896062  410199 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 12:25:33.896095  410199 main.go:141] libmachine: Decoding PEM data...
	I0408 12:25:33.896113  410199 main.go:141] libmachine: Parsing certificate...
	I0408 12:25:33.896174  410199 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 12:25:33.896194  410199 main.go:141] libmachine: Decoding PEM data...
	I0408 12:25:33.896207  410199 main.go:141] libmachine: Parsing certificate...
	I0408 12:25:33.896220  410199 main.go:141] libmachine: Running pre-create checks...
	I0408 12:25:33.896226  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .PreCreateCheck
	I0408 12:25:33.896691  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetConfigRaw
	I0408 12:25:33.897189  410199 main.go:141] libmachine: Creating machine...
	I0408 12:25:33.897209  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .Create
	I0408 12:25:33.897338  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Creating KVM machine...
	I0408 12:25:33.898548  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found existing default KVM network
	I0408 12:25:33.899418  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:33.899263  410561 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f7:39:54} reservation:<nil>}
	I0408 12:25:33.900113  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:33.900034  410561 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000256330}
	I0408 12:25:33.900152  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | created network xml: 
	I0408 12:25:33.900172  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | <network>
	I0408 12:25:33.900187  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |   <name>mk-kubernetes-upgrade-144569</name>
	I0408 12:25:33.900200  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |   <dns enable='no'/>
	I0408 12:25:33.900210  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |   
	I0408 12:25:33.900221  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0408 12:25:33.900240  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |     <dhcp>
	I0408 12:25:33.900252  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0408 12:25:33.900264  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |     </dhcp>
	I0408 12:25:33.900273  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |   </ip>
	I0408 12:25:33.900281  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG |   
	I0408 12:25:33.900291  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | </network>
	I0408 12:25:33.900302  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | 
	I0408 12:25:33.905754  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | trying to create private KVM network mk-kubernetes-upgrade-144569 192.168.50.0/24...
	I0408 12:25:33.979029  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | private KVM network mk-kubernetes-upgrade-144569 192.168.50.0/24 created
	I0408 12:25:33.979072  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:33.978962  410561 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:25:33.979090  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569 ...
	I0408 12:25:33.979113  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 12:25:33.979131  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 12:25:34.221467  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:34.221285  410561 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa...
	I0408 12:25:34.481707  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:34.481533  410561 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/kubernetes-upgrade-144569.rawdisk...
	I0408 12:25:34.481749  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Writing magic tar header
	I0408 12:25:34.481768  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Writing SSH key tar header
	I0408 12:25:34.481779  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:34.481659  410561 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569 ...
	I0408 12:25:34.481800  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569
	I0408 12:25:34.481869  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569 (perms=drwx------)
	I0408 12:25:34.481895  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 12:25:34.481906  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 12:25:34.481946  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 12:25:34.481961  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 12:25:34.481978  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 12:25:34.481991  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:25:34.482002  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 12:25:34.482018  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Creating domain...
	I0408 12:25:34.482029  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 12:25:34.482045  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 12:25:34.482056  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home/jenkins
	I0408 12:25:34.482069  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Checking permissions on dir: /home
	I0408 12:25:34.482098  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Skipping /home - not owner
	I0408 12:25:34.483411  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) define libvirt domain using xml: 
	I0408 12:25:34.483494  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) <domain type='kvm'>
	I0408 12:25:34.483513  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <name>kubernetes-upgrade-144569</name>
	I0408 12:25:34.483531  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <memory unit='MiB'>2200</memory>
	I0408 12:25:34.483575  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <vcpu>2</vcpu>
	I0408 12:25:34.483598  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <features>
	I0408 12:25:34.483639  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <acpi/>
	I0408 12:25:34.483682  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <apic/>
	I0408 12:25:34.483722  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <pae/>
	I0408 12:25:34.483738  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     
	I0408 12:25:34.483750  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   </features>
	I0408 12:25:34.483765  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <cpu mode='host-passthrough'>
	I0408 12:25:34.483772  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   
	I0408 12:25:34.483784  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   </cpu>
	I0408 12:25:34.483794  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <os>
	I0408 12:25:34.483812  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <type>hvm</type>
	I0408 12:25:34.483831  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <boot dev='cdrom'/>
	I0408 12:25:34.483844  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <boot dev='hd'/>
	I0408 12:25:34.483856  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <bootmenu enable='no'/>
	I0408 12:25:34.483879  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   </os>
	I0408 12:25:34.483888  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   <devices>
	I0408 12:25:34.483900  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <disk type='file' device='cdrom'>
	I0408 12:25:34.483916  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/boot2docker.iso'/>
	I0408 12:25:34.483932  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <target dev='hdc' bus='scsi'/>
	I0408 12:25:34.483951  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <readonly/>
	I0408 12:25:34.483978  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </disk>
	I0408 12:25:34.483993  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <disk type='file' device='disk'>
	I0408 12:25:34.484004  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 12:25:34.484033  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/kubernetes-upgrade-144569.rawdisk'/>
	I0408 12:25:34.484052  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <target dev='hda' bus='virtio'/>
	I0408 12:25:34.484060  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </disk>
	I0408 12:25:34.484076  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <interface type='network'>
	I0408 12:25:34.484097  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <source network='mk-kubernetes-upgrade-144569'/>
	I0408 12:25:34.484110  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <model type='virtio'/>
	I0408 12:25:34.484121  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </interface>
	I0408 12:25:34.484134  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <interface type='network'>
	I0408 12:25:34.484146  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <source network='default'/>
	I0408 12:25:34.484170  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <model type='virtio'/>
	I0408 12:25:34.484190  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </interface>
	I0408 12:25:34.484204  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <serial type='pty'>
	I0408 12:25:34.484216  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <target port='0'/>
	I0408 12:25:34.484227  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </serial>
	I0408 12:25:34.484239  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <console type='pty'>
	I0408 12:25:34.484260  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <target type='serial' port='0'/>
	I0408 12:25:34.484277  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </console>
	I0408 12:25:34.484298  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     <rng model='virtio'>
	I0408 12:25:34.484311  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)       <backend model='random'>/dev/random</backend>
	I0408 12:25:34.484325  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     </rng>
	I0408 12:25:34.484337  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     
	I0408 12:25:34.484345  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)     
	I0408 12:25:34.484359  410199 main.go:141] libmachine: (kubernetes-upgrade-144569)   </devices>
	I0408 12:25:34.484369  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) </domain>
	I0408 12:25:34.484385  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) 
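	(Editor's note) The XML dumped above is the libvirt domain definition that the kvm2 driver builds before creating the VM. As a rough illustration of how such a definition becomes a running domain, here is a minimal, hypothetical Go sketch using the libvirt Go bindings (libvirt.org/go/libvirt). It is not minikube's actual driver code; the connection URI mirrors the KVMQemuURI seen later in this log, and the file name domain.xml is an assumption.

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumption: libvirt Go bindings are available
	)

	func main() {
		// Connect to the system libvirt daemon (qemu:///system, as in this profile's config).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// domain.xml is a hypothetical file holding a definition like the one logged above.
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			log.Fatalf("read xml: %v", err)
		}

		// Define the domain persistently, then boot it.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatalf("define: %v", err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatalf("start: %v", err)
		}
		log.Println("domain defined and started")
	}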
	I0408 12:25:34.489356  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:64:69:dd in network default
	I0408 12:25:34.490088  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Ensuring networks are active...
	I0408 12:25:34.490108  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:34.490989  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Ensuring network default is active
	I0408 12:25:34.491373  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Ensuring network mk-kubernetes-upgrade-144569 is active
	I0408 12:25:34.491990  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Getting domain xml...
	I0408 12:25:34.492824  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Creating domain...
	I0408 12:25:35.772249  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Waiting to get IP...
	I0408 12:25:35.773215  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:35.773683  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:35.773717  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:35.773655  410561 retry.go:31] will retry after 262.208881ms: waiting for machine to come up
	I0408 12:25:36.037264  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:36.037826  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:36.037854  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:36.037735  410561 retry.go:31] will retry after 368.009225ms: waiting for machine to come up
	I0408 12:25:36.407530  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:36.408044  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:36.408088  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:36.407988  410561 retry.go:31] will retry after 368.661589ms: waiting for machine to come up
	I0408 12:25:36.778645  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:36.779192  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:36.779223  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:36.779130  410561 retry.go:31] will retry after 577.484227ms: waiting for machine to come up
	I0408 12:25:37.358554  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:37.359025  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:37.359049  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:37.358985  410561 retry.go:31] will retry after 492.875024ms: waiting for machine to come up
	I0408 12:25:37.853848  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:37.854398  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:37.854427  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:37.854338  410561 retry.go:31] will retry after 936.669778ms: waiting for machine to come up
	I0408 12:25:38.793155  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:38.793587  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:38.793613  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:38.793517  410561 retry.go:31] will retry after 771.160669ms: waiting for machine to come up
	I0408 12:25:39.565928  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:39.566484  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:39.566522  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:39.566428  410561 retry.go:31] will retry after 933.268438ms: waiting for machine to come up
	I0408 12:25:40.501806  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:40.502333  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:40.502355  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:40.502286  410561 retry.go:31] will retry after 1.812352654s: waiting for machine to come up
	I0408 12:25:42.316749  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:42.317348  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:42.317409  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:42.317274  410561 retry.go:31] will retry after 1.800556979s: waiting for machine to come up
	I0408 12:25:44.119250  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:44.119672  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:44.119711  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:44.119595  410561 retry.go:31] will retry after 2.223643727s: waiting for machine to come up
	I0408 12:25:46.345022  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:46.345543  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:46.345597  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:46.345486  410561 retry.go:31] will retry after 2.621800903s: waiting for machine to come up
	I0408 12:25:48.968797  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:48.969319  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:48.969355  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:48.969259  410561 retry.go:31] will retry after 3.688646053s: waiting for machine to come up
	I0408 12:25:52.662151  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:52.662663  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find current IP address of domain kubernetes-upgrade-144569 in network mk-kubernetes-upgrade-144569
	I0408 12:25:52.662698  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | I0408 12:25:52.662595  410561 retry.go:31] will retry after 4.700759727s: waiting for machine to come up
	I0408 12:25:57.367381  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.367962  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Found IP for machine: 192.168.50.62
	I0408 12:25:57.367987  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Reserving static IP address...
	I0408 12:25:57.367997  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has current primary IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.368490  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-144569", mac: "52:54:00:c1:e9:e6", ip: "192.168.50.62"} in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.449235  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Getting to WaitForSSH function...
	I0408 12:25:57.449273  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Reserved static IP address: 192.168.50.62
	I0408 12:25:57.449290  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Waiting for SSH to be available...
	I0408 12:25:57.452202  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.452641  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:57.452671  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.452807  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Using SSH client type: external
	I0408 12:25:57.452844  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa (-rw-------)
	I0408 12:25:57.452902  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:25:57.452920  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | About to run SSH command:
	I0408 12:25:57.452941  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | exit 0
	I0408 12:25:57.580520  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | SSH cmd err, output: <nil>: 
	I0408 12:25:57.580809  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) KVM machine creation complete!
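	(Editor's note) The "will retry after ..." lines between 12:25:35 and 12:25:52 above come from a poll-with-backoff loop: the driver repeatedly asks libvirt for the domain's DHCP lease and sleeps for a growing, jittered interval between attempts until an IP appears. A minimal sketch of that pattern follows; it is not minikube's retry.go, and lookupIP is a stand-in for the real lease query.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the network's DHCP leases for the domain's
	// MAC address; in the real driver this goes through libvirt.
	func lookupIP(attempt int) (string, bool) {
		if attempt >= 5 { // pretend the lease shows up on the fifth try
			return "192.168.50.62", true
		}
		return "", false
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			if ip, ok := lookupIP(attempt); ok {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Sleep for the base delay plus jitter, then grow the base delay,
			// which produces the irregular, increasing intervals seen in the log.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("timed out waiting for an IP")
	}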
	I0408 12:25:57.581199  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetConfigRaw
	I0408 12:25:57.581815  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:57.582058  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:57.582267  410199 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 12:25:57.582287  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetState
	I0408 12:25:57.583654  410199 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 12:25:57.583709  410199 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 12:25:57.583724  410199 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 12:25:57.583733  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:57.586095  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.586443  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:57.586475  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.586596  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:57.586779  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.586933  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.587092  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:57.587243  410199 main.go:141] libmachine: Using SSH client type: native
	I0408 12:25:57.587442  410199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0408 12:25:57.587453  410199 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 12:25:57.695660  410199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:25:57.695745  410199 main.go:141] libmachine: Detecting the provisioner...
	I0408 12:25:57.695757  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:57.699081  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.699380  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:57.699410  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.699772  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:57.700022  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.700184  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.700324  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:57.700551  410199 main.go:141] libmachine: Using SSH client type: native
	I0408 12:25:57.700739  410199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0408 12:25:57.700753  410199 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 12:25:57.804843  410199 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 12:25:57.804972  410199 main.go:141] libmachine: found compatible host: buildroot
	I0408 12:25:57.804987  410199 main.go:141] libmachine: Provisioning with buildroot...
	I0408 12:25:57.804998  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetMachineName
	I0408 12:25:57.805283  410199 buildroot.go:166] provisioning hostname "kubernetes-upgrade-144569"
	I0408 12:25:57.805313  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetMachineName
	I0408 12:25:57.805505  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:57.807655  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.807999  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:57.808032  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.808243  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:57.808430  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.808560  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.808714  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:57.808915  410199 main.go:141] libmachine: Using SSH client type: native
	I0408 12:25:57.809161  410199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0408 12:25:57.809183  410199 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-144569 && echo "kubernetes-upgrade-144569" | sudo tee /etc/hostname
	I0408 12:25:57.926649  410199 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-144569
	
	I0408 12:25:57.926683  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:57.929729  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.930134  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:57.930182  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:57.930356  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:57.930563  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.930732  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:57.930917  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:57.931109  410199 main.go:141] libmachine: Using SSH client type: native
	I0408 12:25:57.931288  410199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0408 12:25:57.931306  410199 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-144569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-144569/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-144569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:25:58.045973  410199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:25:58.046008  410199 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:25:58.046026  410199 buildroot.go:174] setting up certificates
	I0408 12:25:58.046038  410199 provision.go:84] configureAuth start
	I0408 12:25:58.046047  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetMachineName
	I0408 12:25:58.046361  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetIP
	I0408 12:25:58.050073  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.050522  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.050565  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.050727  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.053384  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.053736  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.053766  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.053887  410199 provision.go:143] copyHostCerts
	I0408 12:25:58.053952  410199 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:25:58.053972  410199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:25:58.054026  410199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:25:58.054123  410199 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:25:58.054131  410199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:25:58.054151  410199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:25:58.054219  410199 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:25:58.054227  410199 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:25:58.054243  410199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:25:58.054300  410199 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-144569 san=[127.0.0.1 192.168.50.62 kubernetes-upgrade-144569 localhost minikube]
	I0408 12:25:58.168504  410199 provision.go:177] copyRemoteCerts
	I0408 12:25:58.168570  410199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:25:58.168597  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.171755  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.172110  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.172163  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.172359  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:58.172578  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.172709  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:58.172854  410199 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa Username:docker}
	I0408 12:25:58.254490  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:25:58.281907  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0408 12:25:58.308277  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 12:25:58.334618  410199 provision.go:87] duration metric: took 288.56774ms to configureAuth
	I0408 12:25:58.334652  410199 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:25:58.334875  410199 config.go:182] Loaded profile config "kubernetes-upgrade-144569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:25:58.334981  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.337937  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.338340  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.338373  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.338549  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:58.338768  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.338932  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.339071  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:58.339254  410199 main.go:141] libmachine: Using SSH client type: native
	I0408 12:25:58.339429  410199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0408 12:25:58.339444  410199 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:25:58.618390  410199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:25:58.618419  410199 main.go:141] libmachine: Checking connection to Docker...
	I0408 12:25:58.618454  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetURL
	I0408 12:25:58.620167  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Using libvirt version 6000000
	I0408 12:25:58.622558  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.622928  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.622971  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.623083  410199 main.go:141] libmachine: Docker is up and running!
	I0408 12:25:58.623103  410199 main.go:141] libmachine: Reticulating splines...
	I0408 12:25:58.623110  410199 client.go:171] duration metric: took 24.727078218s to LocalClient.Create
	I0408 12:25:58.623137  410199 start.go:167] duration metric: took 24.727144689s to libmachine.API.Create "kubernetes-upgrade-144569"
	I0408 12:25:58.623153  410199 start.go:293] postStartSetup for "kubernetes-upgrade-144569" (driver="kvm2")
	I0408 12:25:58.623178  410199 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:25:58.623215  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:58.623466  410199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:25:58.623485  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.625703  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.626040  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.626071  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.626232  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:58.626427  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.626582  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:58.626733  410199 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa Username:docker}
	I0408 12:25:58.711617  410199 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:25:58.718152  410199 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:25:58.718181  410199 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:25:58.718258  410199 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:25:58.718349  410199 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:25:58.718474  410199 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:25:58.729241  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:25:58.754320  410199 start.go:296] duration metric: took 131.142254ms for postStartSetup
	I0408 12:25:58.754392  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetConfigRaw
	I0408 12:25:58.755031  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetIP
	I0408 12:25:58.757831  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.758202  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.758237  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.758434  410199 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/config.json ...
	I0408 12:25:58.758674  410199 start.go:128] duration metric: took 24.885279482s to createHost
	I0408 12:25:58.758706  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.761151  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.761616  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.761661  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.761871  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:58.762067  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.762226  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.762360  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:58.762586  410199 main.go:141] libmachine: Using SSH client type: native
	I0408 12:25:58.762812  410199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0408 12:25:58.762825  410199 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 12:25:58.868966  410199 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712579158.851225966
	
	I0408 12:25:58.868993  410199 fix.go:216] guest clock: 1712579158.851225966
	I0408 12:25:58.869002  410199 fix.go:229] Guest: 2024-04-08 12:25:58.851225966 +0000 UTC Remote: 2024-04-08 12:25:58.758692271 +0000 UTC m=+51.333748530 (delta=92.533695ms)
	I0408 12:25:58.869022  410199 fix.go:200] guest clock delta is within tolerance: 92.533695ms
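	(Editor's note) The "guest clock" lines above compare the VM's clock, read over SSH with `date +%s.%N`, against the host's reading and accept the machine only if the delta is small. A rough, self-contained sketch of that comparison follows; the timestamps and the one-second tolerance are assumptions for illustration, not minikube's exact values.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Stand-in for the `date +%s.%N` output captured from the guest over SSH.
		guestOut := "1712579158.851225966"
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		// Stand-in for the host-side reading taken at roughly the same moment.
		host := time.Date(2024, 4, 8, 12, 25, 58, 758692271, time.UTC)

		delta := guest.Sub(host)
		fmt.Printf("guest clock delta: %v\n", delta)

		const tolerance = time.Second // assumed tolerance for this sketch
		if delta > -tolerance && delta < tolerance {
			fmt.Println("guest clock delta is within tolerance")
		}
	}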
	I0408 12:25:58.869034  410199 start.go:83] releasing machines lock for "kubernetes-upgrade-144569", held for 24.995874519s
	I0408 12:25:58.869066  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:58.869387  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetIP
	I0408 12:25:58.872326  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.872763  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.872810  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.872990  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:58.873583  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:58.873804  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:25:58.873920  410199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:25:58.873963  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.874077  410199 ssh_runner.go:195] Run: cat /version.json
	I0408 12:25:58.874105  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:25:58.877014  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.877250  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.877416  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.877447  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.877528  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:25:58.877558  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:25:58.877613  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:58.877825  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.877884  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:25:58.877991  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:58.878083  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:25:58.878155  410199 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa Username:docker}
	I0408 12:25:58.878229  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:25:58.878407  410199 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa Username:docker}
	I0408 12:25:58.998834  410199 ssh_runner.go:195] Run: systemctl --version
	I0408 12:25:59.005687  410199 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:25:59.180811  410199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:25:59.188358  410199 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:25:59.188440  410199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:25:59.205735  410199 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:25:59.205779  410199 start.go:494] detecting cgroup driver to use...
	I0408 12:25:59.205838  410199 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:25:59.222695  410199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:25:59.237790  410199 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:25:59.237844  410199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:25:59.253324  410199 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:25:59.268290  410199 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:25:59.395079  410199 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:25:59.558532  410199 docker.go:233] disabling docker service ...
	I0408 12:25:59.558613  410199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:25:59.574291  410199 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:25:59.587743  410199 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:25:59.734006  410199 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:25:59.872249  410199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:25:59.887923  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:25:59.909910  410199 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:25:59.909990  410199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:25:59.922480  410199 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:25:59.922564  410199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:25:59.935588  410199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:25:59.949482  410199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:25:59.961955  410199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:25:59.978143  410199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:25:59.992364  410199 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:25:59.992447  410199 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:26:00.010343  410199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:26:00.022166  410199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:26:00.191719  410199 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:26:00.354133  410199 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:26:00.354236  410199 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:26:00.359703  410199 start.go:562] Will wait 60s for crictl version
	I0408 12:26:00.359766  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:00.364344  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:26:00.407325  410199 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:26:00.407402  410199 ssh_runner.go:195] Run: crio --version
	I0408 12:26:00.441054  410199 ssh_runner.go:195] Run: crio --version
	I0408 12:26:00.476669  410199 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:26:00.478283  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetIP
	I0408 12:26:00.482051  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:26:00.482436  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:26:00.482470  410199 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:26:00.482771  410199 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:26:00.487794  410199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:26:00.504144  410199 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-144569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-144569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:26:00.504327  410199 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:26:00.504413  410199 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:26:00.545049  410199 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:26:00.545128  410199 ssh_runner.go:195] Run: which lz4
	I0408 12:26:00.550260  410199 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:26:00.555605  410199 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:26:00.555660  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:26:02.575439  410199 crio.go:462] duration metric: took 2.025236771s to copy over tarball
	I0408 12:26:02.575571  410199 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:26:05.486938  410199 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.911329728s)
	I0408 12:26:05.486979  410199 crio.go:469] duration metric: took 2.911505437s to extract the tarball
	I0408 12:26:05.486987  410199 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:26:05.530713  410199 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:26:05.593484  410199 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:26:05.593522  410199 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:26:05.593722  410199 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:26:05.593753  410199 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:26:05.593805  410199 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:26:05.593761  410199 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:26:05.593811  410199 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:26:05.593900  410199 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:26:05.593721  410199 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:26:05.594042  410199 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:26:05.595874  410199 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:26:05.595888  410199 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:26:05.595876  410199 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:26:05.595875  410199 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:26:05.595952  410199 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:26:05.595991  410199 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:26:05.596061  410199 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:26:05.596630  410199 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:26:05.796310  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:26:05.825825  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:26:05.846780  410199 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:26:05.846830  410199 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:26:05.846880  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:05.869802  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:26:05.885505  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:26:05.889286  410199 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:26:05.889387  410199 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:26:05.889469  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:05.889397  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:26:05.892597  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:26:05.894212  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:26:05.916063  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:26:05.968177  410199 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:26:05.968235  410199 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:26:05.968292  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:06.028576  410199 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:26:06.028639  410199 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:26:06.028671  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:26:06.028697  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:06.062522  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:26:06.078240  410199 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:26:06.078291  410199 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:26:06.078315  410199 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:26:06.078354  410199 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:26:06.078374  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:06.078403  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:06.078386  410199 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:26:06.078454  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:26:06.078471  410199 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:26:06.078504  410199 ssh_runner.go:195] Run: which crictl
	I0408 12:26:06.078434  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:26:06.130892  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:26:06.130951  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:26:06.131201  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:26:06.171851  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:26:06.180056  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:26:06.180100  410199 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:26:06.226865  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:26:06.243343  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:26:06.264595  410199 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:26:06.396034  410199 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:26:06.551598  410199 cache_images.go:92] duration metric: took 958.056827ms to LoadCachedImages
	W0408 12:26:06.551729  410199 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
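The two "Unable to load cached images" lines above point at a per-image cache file missing on the Jenkins host. As a quick sanity check, one could list what is actually present under the cache directories named in the log (the paths come from the log; the listing commands themselves are an assumption, not part of this run):

    # Sketch only: list the image cache the log says is missing kube-proxy_v1.20.0.
    ls -l /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/
    ls -l /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/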
	I0408 12:26:06.551753  410199 kubeadm.go:928] updating node { 192.168.50.62 8443 v1.20.0 crio true true} ...
	I0408 12:26:06.551885  410199 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-144569 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-144569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
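The [Unit]/[Service] fragment above is the kubelet drop-in that is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later. A minimal sketch, using standard systemd commands not taken from this log, for confirming which ExecStart the kubelet actually picked up inside the VM:

    # Sketch: show the kubelet unit plus drop-ins and the effective ExecStart.
    systemctl cat kubelet --no-pager
    systemctl show kubelet --property=ExecStart --no-pager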
	I0408 12:26:06.551980  410199 ssh_runner.go:195] Run: crio config
	I0408 12:26:06.607231  410199 cni.go:84] Creating CNI manager for ""
	I0408 12:26:06.607263  410199 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:26:06.607281  410199 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:26:06.607367  410199 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-144569 NodeName:kubernetes-upgrade-144569 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:26:06.607646  410199 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-144569"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:26:06.607741  410199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:26:06.619746  410199 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:26:06.619853  410199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:26:06.633294  410199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0408 12:26:06.652077  410199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:26:06.672993  410199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
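At this point the kubeadm config rendered above has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node (it is promoted to kubeadm.yaml just before kubeadm init runs). A minimal sketch for inspecting the staged file from the host, reusing the binary path and ssh form used elsewhere in this report and the profile name from this run:

    # Sketch only: read the staged kubeadm config from the node.
    out/minikube-linux-amd64 -p kubernetes-upgrade-144569 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"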
	I0408 12:26:06.693516  410199 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0408 12:26:06.697987  410199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:26:06.712664  410199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:26:06.850523  410199 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:26:06.871214  410199 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569 for IP: 192.168.50.62
	I0408 12:26:06.871245  410199 certs.go:194] generating shared ca certs ...
	I0408 12:26:06.871263  410199 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:06.871464  410199 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:26:06.871532  410199 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:26:06.871548  410199 certs.go:256] generating profile certs ...
	I0408 12:26:06.871629  410199 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.key
	I0408 12:26:06.871647  410199 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.crt with IP's: []
	I0408 12:26:07.119525  410199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.crt ...
	I0408 12:26:07.119562  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.crt: {Name:mk29d92f7feb6b6289f772930d7315f65857b6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:07.119764  410199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.key ...
	I0408 12:26:07.119781  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.key: {Name:mk98fb0d9229d5e4ebd5d8060b2ca41bf4c7a801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:07.119871  410199 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.key.12538e6b
	I0408 12:26:07.119889  410199 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.crt.12538e6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.62]
	I0408 12:26:07.339911  410199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.crt.12538e6b ...
	I0408 12:26:07.339951  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.crt.12538e6b: {Name:mk04bbd40312122cb60cb627832c4262e24904d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:07.340125  410199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.key.12538e6b ...
	I0408 12:26:07.340140  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.key.12538e6b: {Name:mk5386e791d921d7a016714a23fc10ddc1947e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:07.340207  410199 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.crt.12538e6b -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.crt
	I0408 12:26:07.340291  410199 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.key.12538e6b -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.key
	I0408 12:26:07.340341  410199 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.key
	I0408 12:26:07.340356  410199 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.crt with IP's: []
	I0408 12:26:07.442383  410199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.crt ...
	I0408 12:26:07.442416  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.crt: {Name:mk9010a0469e8b5cdd9b699ae5e7bf71094a0194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:07.442582  410199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.key ...
	I0408 12:26:07.442597  410199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.key: {Name:mkd04a396b1a21346fb771bcbf4333c8f3edf99c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:26:07.442748  410199 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:26:07.442784  410199 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:26:07.442794  410199 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:26:07.442813  410199 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:26:07.442834  410199 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:26:07.442859  410199 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:26:07.442897  410199 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:26:07.444633  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:26:07.475242  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:26:07.505395  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:26:07.681959  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:26:07.720770  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 12:26:07.752733  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:26:07.784797  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:26:07.833084  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:26:07.863930  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:26:07.894342  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:26:07.925496  410199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:26:07.954379  410199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:26:07.974636  410199 ssh_runner.go:195] Run: openssl version
	I0408 12:26:07.981153  410199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:26:07.995555  410199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:26:08.001380  410199 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:26:08.001461  410199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:26:08.008206  410199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:26:08.021237  410199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:26:08.034139  410199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:26:08.039658  410199 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:26:08.039766  410199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:26:08.046321  410199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:26:08.059363  410199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:26:08.073161  410199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:26:08.078591  410199 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:26:08.078659  410199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:26:08.085199  410199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
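The hash-named symlinks created in the lines above follow OpenSSL's subject-hash convention: the link under /etc/ssl/certs is the certificate's openssl x509 -hash value plus a ".0" suffix (e.g. b5213941.0 for minikubeCA.pem, per the link command in the log). A quick manual check of that mapping, using the same cert path as the log:

    # Sketch: the symlink name is the subject-name hash printed by openssl.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected: b5213941
    ls -l /etc/ssl/certs/b5213941.0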
	I0408 12:26:08.097808  410199 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:26:08.102593  410199 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 12:26:08.102662  410199 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-144569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-144569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:26:08.102851  410199 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:26:08.102962  410199 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:26:08.144378  410199 cri.go:89] found id: ""
	I0408 12:26:08.144472  410199 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 12:26:08.156968  410199 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:26:08.168386  410199 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:26:08.179497  410199 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:26:08.179521  410199 kubeadm.go:156] found existing configuration files:
	
	I0408 12:26:08.179577  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:26:08.190599  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:26:08.190690  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:26:08.202539  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:26:08.213505  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:26:08.213571  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:26:08.225207  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:26:08.236264  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:26:08.236332  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:26:08.248196  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:26:08.259586  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:26:08.259666  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:26:08.271480  410199 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:26:08.401940  410199 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:26:08.402060  410199 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:26:08.613306  410199 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:26:08.613452  410199 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:26:08.613562  410199 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:26:08.888679  410199 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:26:08.891431  410199 out.go:204]   - Generating certificates and keys ...
	I0408 12:26:08.891551  410199 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:26:08.891650  410199 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:26:09.669520  410199 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 12:26:09.869813  410199 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 12:26:10.201060  410199 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 12:26:10.379015  410199 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 12:26:10.545208  410199 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 12:26:10.545478  410199 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-144569 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0408 12:26:10.618809  410199 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 12:26:10.619119  410199 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-144569 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0408 12:26:10.847865  410199 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 12:26:11.212538  410199 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 12:26:11.363752  410199 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 12:26:11.363950  410199 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:26:11.643992  410199 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:26:11.879395  410199 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:26:12.045894  410199 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:26:12.255506  410199 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:26:12.271749  410199 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:26:12.272918  410199 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:26:12.272973  410199 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:26:12.427526  410199 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:26:12.429338  410199 out.go:204]   - Booting up control plane ...
	I0408 12:26:12.429466  410199 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:26:12.444548  410199 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:26:12.447405  410199 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:26:12.447530  410199 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:26:12.452885  410199 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:26:52.451154  410199 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:26:52.451857  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:26:52.452134  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:26:57.452856  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:26:57.453133  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:27:07.454267  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:27:07.454615  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:27:27.455701  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:27:27.455942  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:28:07.456926  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:28:07.457572  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:28:07.457603  410199 kubeadm.go:309] 
	I0408 12:28:07.457697  410199 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:28:07.457785  410199 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:28:07.457797  410199 kubeadm.go:309] 
	I0408 12:28:07.457868  410199 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:28:07.457936  410199 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:28:07.458165  410199 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:28:07.458182  410199 kubeadm.go:309] 
	I0408 12:28:07.458399  410199 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:28:07.458468  410199 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:28:07.458536  410199 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:28:07.458550  410199 kubeadm.go:309] 
	I0408 12:28:07.458804  410199 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:28:07.459003  410199 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:28:07.459025  410199 kubeadm.go:309] 
	I0408 12:28:07.459269  410199 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:28:07.459449  410199 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:28:07.459611  410199 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:28:07.459778  410199 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:28:07.459790  410199 kubeadm.go:309] 
	I0408 12:28:07.460307  410199 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:28:07.460514  410199 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:28:07.460905  410199 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
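The checks kubeadm recommends above can be run directly inside the VM. A short sketch combining the exact commands quoted in the output (tail and --no-pager are added here for brevity and are not from the log):

    # Sketch: the probes kubeadm's wait-control-plane phase kept failing on.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sSL http://localhost:10248/healthz
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause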
	W0408 12:28:07.460975  410199 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-144569 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-144569 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-144569 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-144569 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:28:07.461363  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:28:09.536440  410199 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.074800365s)
	I0408 12:28:09.536522  410199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:28:09.557326  410199 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:28:09.570780  410199 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:28:09.570810  410199 kubeadm.go:156] found existing configuration files:
	
	I0408 12:28:09.570872  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:28:09.583980  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:28:09.584072  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:28:09.598265  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:28:09.612472  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:28:09.612564  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:28:09.626889  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:28:09.640810  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:28:09.640908  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:28:09.654680  410199 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:28:09.668664  410199 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:28:09.668729  410199 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:28:09.684479  410199 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:28:10.040576  410199 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:30:06.225338  410199 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:30:06.225452  410199 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:30:06.228042  410199 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:30:06.228112  410199 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:30:06.228217  410199 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:30:06.228337  410199 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:30:06.228454  410199 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:30:06.228535  410199 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:30:06.230500  410199 out.go:204]   - Generating certificates and keys ...
	I0408 12:30:06.230607  410199 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:30:06.230710  410199 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:30:06.230812  410199 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:30:06.230890  410199 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:30:06.230991  410199 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:30:06.231065  410199 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:30:06.231145  410199 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:30:06.231243  410199 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:30:06.231373  410199 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:30:06.231503  410199 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:30:06.231578  410199 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:30:06.231664  410199 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:30:06.231774  410199 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:30:06.231860  410199 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:30:06.231946  410199 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:30:06.232028  410199 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:30:06.232163  410199 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:30:06.232264  410199 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:30:06.232300  410199 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:30:06.232400  410199 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:30:06.234092  410199 out.go:204]   - Booting up control plane ...
	I0408 12:30:06.234196  410199 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:30:06.234297  410199 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:30:06.234380  410199 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:30:06.234488  410199 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:30:06.234720  410199 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:30:06.234807  410199 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:30:06.234917  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:30:06.235190  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:30:06.235293  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:30:06.235515  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:30:06.235612  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:30:06.235836  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:30:06.235897  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:30:06.236127  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:30:06.236235  410199 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:30:06.236487  410199 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:30:06.236525  410199 kubeadm.go:309] 
	I0408 12:30:06.236595  410199 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:30:06.236680  410199 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:30:06.236705  410199 kubeadm.go:309] 
	I0408 12:30:06.236758  410199 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:30:06.236802  410199 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:30:06.236939  410199 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:30:06.236951  410199 kubeadm.go:309] 
	I0408 12:30:06.237056  410199 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:30:06.237100  410199 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:30:06.237151  410199 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:30:06.237161  410199 kubeadm.go:309] 
	I0408 12:30:06.237303  410199 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:30:06.237403  410199 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:30:06.237412  410199 kubeadm.go:309] 
	I0408 12:30:06.237507  410199 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:30:06.237597  410199 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:30:06.237665  410199 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:30:06.237732  410199 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:30:06.237753  410199 kubeadm.go:309] 
	I0408 12:30:06.237813  410199 kubeadm.go:393] duration metric: took 3m58.135158177s to StartCluster
	I0408 12:30:06.237860  410199 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:30:06.237932  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:30:06.298032  410199 cri.go:89] found id: ""
	I0408 12:30:06.298072  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.298085  410199 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:30:06.298094  410199 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:30:06.298166  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:30:06.345050  410199 cri.go:89] found id: ""
	I0408 12:30:06.345091  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.345103  410199 logs.go:278] No container was found matching "etcd"
	I0408 12:30:06.345112  410199 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:30:06.345182  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:30:06.388355  410199 cri.go:89] found id: ""
	I0408 12:30:06.388384  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.388396  410199 logs.go:278] No container was found matching "coredns"
	I0408 12:30:06.388405  410199 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:30:06.388494  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:30:06.436410  410199 cri.go:89] found id: ""
	I0408 12:30:06.436437  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.436447  410199 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:30:06.436455  410199 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:30:06.436522  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:30:06.477687  410199 cri.go:89] found id: ""
	I0408 12:30:06.477722  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.477734  410199 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:30:06.477743  410199 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:30:06.477822  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:30:06.520807  410199 cri.go:89] found id: ""
	I0408 12:30:06.520841  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.520859  410199 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:30:06.520869  410199 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:30:06.520932  410199 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:30:06.568423  410199 cri.go:89] found id: ""
	I0408 12:30:06.568455  410199 logs.go:276] 0 containers: []
	W0408 12:30:06.568466  410199 logs.go:278] No container was found matching "kindnet"
	I0408 12:30:06.568480  410199 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:30:06.568497  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:30:06.702743  410199 logs.go:123] Gathering logs for container status ...
	I0408 12:30:06.702847  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:30:06.756010  410199 logs.go:123] Gathering logs for kubelet ...
	I0408 12:30:06.756052  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:30:06.815396  410199 logs.go:123] Gathering logs for dmesg ...
	I0408 12:30:06.815438  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:30:06.836398  410199 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:30:06.836439  410199 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:30:07.017194  410199 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0408 12:30:07.017247  410199 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:30:07.017297  410199 out.go:239] * 
	* 
	W0408 12:30:07.017364  410199 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:30:07.017416  410199 out.go:239] * 
	* 
	W0408 12:30:07.018679  410199 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:30:07.023044  410199 out.go:177] 
	W0408 12:30:07.024528  410199 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:30:07.024609  410199 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:30:07.024639  410199 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:30:07.026247  410199 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
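The run above exited with status 109 after minikube reported K8S_KUBELET_NOT_RUNNING, and the suggestion in the log points at a kubelet/CRI-O cgroup-driver mismatch. A minimal triage sketch (editorial, not part of the recorded run; it assumes the default kubeadm and CRI-O config paths inside the VM):

	# Compare the kubelet's cgroup driver with CRI-O's inside the affected profile.
	minikube ssh -p kubernetes-upgrade-144569 -- sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	minikube ssh -p kubernetes-upgrade-144569 -- sudo grep -Ri cgroup_manager /etc/crio/
	# If the two values disagree, retry the v1.20.0 start with the driver forced to systemd, as the log itself suggests.
	minikube start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd --driver=kvm2 --container-runtime=crio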
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-144569
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-144569: (3.212013899s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-144569 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-144569 status --format={{.Host}}: exit status 7 (92.987004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.516121897s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-144569 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.740068ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-144569] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-144569
	    minikube start -p kubernetes-upgrade-144569 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1445692 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-144569 --kubernetes-version=v1.30.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-144569 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.899798748s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-08 12:32:03.991718851 +0000 UTC m=+4301.778401061
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-144569 -n kubernetes-upgrade-144569
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-144569 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-144569 logs -n 25: (2.029149269s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p cert-expiration-283523             | cert-expiration-283523    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:27 UTC | 08 Apr 24 12:28 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-105795 sudo           | NoKubernetes-105795       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:28 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |                |                     |                     |
	|         | service kubelet                       |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-105795                | NoKubernetes-105795       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:28 UTC | 08 Apr 24 12:28 UTC |
	| start   | -p NoKubernetes-105795                | NoKubernetes-105795       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:28 UTC | 08 Apr 24 12:28 UTC |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-105795 sudo           | NoKubernetes-105795       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:28 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |                |                     |                     |
	|         | service kubelet                       |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-105795                | NoKubernetes-105795       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:28 UTC | 08 Apr 24 12:28 UTC |
	| start   | -p force-systemd-env-495725           | force-systemd-env-495725  | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:28 UTC | 08 Apr 24 12:29 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-115055             | running-upgrade-115055    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:29 UTC |
	| start   | -p cert-options-064378                | cert-options-064378       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:29 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-495725           | force-systemd-env-495725  | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:29 UTC |
	| start   | -p pause-778946 --memory=2048         | pause-778946              | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:31 UTC |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | cert-options-064378 ssh               | cert-options-064378       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:29 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-064378 -- sudo        | cert-options-064378       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:29 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-064378                | cert-options-064378       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:29 UTC | 08 Apr 24 12:29 UTC |
	| start   | -p stopped-upgrade-660392             | minikube                  | jenkins | v1.26.0        | 08 Apr 24 12:30 UTC | 08 Apr 24 12:30 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-144569          | kubernetes-upgrade-144569 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:30 UTC | 08 Apr 24 12:30 UTC |
	| start   | -p kubernetes-upgrade-144569          | kubernetes-upgrade-144569 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:30 UTC | 08 Apr 24 12:31 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0     |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-660392 stop           | minikube                  | jenkins | v1.26.0        | 08 Apr 24 12:30 UTC | 08 Apr 24 12:31 UTC |
	| start   | -p stopped-upgrade-660392             | stopped-upgrade-660392    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC | 08 Apr 24 12:31 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-144569          | kubernetes-upgrade-144569 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-144569          | kubernetes-upgrade-144569 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC | 08 Apr 24 12:32 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0     |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p pause-778946                       | pause-778946              | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p cert-expiration-283523             | cert-expiration-283523    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p stopped-upgrade-660392             | stopped-upgrade-660392    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC | 08 Apr 24 12:31 UTC |
	| start   | -p auto-583253 --memory=3072          | auto-583253               | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:31 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
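	The last row of the audit table above is the profile whose "Last Start" log follows. Reconstructed from the table's flag columns (and using the binary path seen elsewhere in this report), that row corresponds to an invocation along these lines:
	
	    out/minikube-linux-amd64 start -p auto-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio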
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:31:49
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:31:49.690229  417855 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:31:49.690674  417855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:31:49.690691  417855 out.go:304] Setting ErrFile to fd 2...
	I0408 12:31:49.690699  417855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:31:49.691222  417855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:31:49.694883  417855 out.go:298] Setting JSON to false
	I0408 12:31:49.696009  417855 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8053,"bootTime":1712571457,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:31:49.696085  417855 start.go:139] virtualization: kvm guest
	I0408 12:31:49.698383  417855 out.go:177] * [auto-583253] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:31:49.700007  417855 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:31:49.699959  417855 notify.go:220] Checking for updates...
	I0408 12:31:49.701587  417855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:31:49.703071  417855 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:31:49.704615  417855 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:31:49.706038  417855 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:31:49.707562  417855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:31:49.709334  417855 config.go:182] Loaded profile config "cert-expiration-283523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:31:49.709441  417855 config.go:182] Loaded profile config "kubernetes-upgrade-144569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:31:49.709559  417855 config.go:182] Loaded profile config "pause-778946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:31:49.709643  417855 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:31:49.749506  417855 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 12:31:49.750820  417855 start.go:297] selected driver: kvm2
	I0408 12:31:49.750838  417855 start.go:901] validating driver "kvm2" against <nil>
	I0408 12:31:49.750850  417855 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:31:49.751799  417855 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:31:49.751889  417855 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:31:49.768149  417855 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:31:49.768212  417855 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 12:31:49.768478  417855 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:31:49.768550  417855 cni.go:84] Creating CNI manager for ""
	I0408 12:31:49.768561  417855 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:31:49.768572  417855 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 12:31:49.768651  417855 start.go:340] cluster config:
	{Name:auto-583253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-583253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:31:49.768788  417855 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:31:49.771166  417855 out.go:177] * Starting "auto-583253" primary control-plane node in "auto-583253" cluster
	I0408 12:31:45.005893  417667 machine.go:94] provisionDockerMachine start ...
	I0408 12:31:45.005914  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .DriverName
	I0408 12:31:45.006164  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:45.009047  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.009644  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.009671  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.009856  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:45.010051  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.010323  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.010591  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:45.010798  417667 main.go:141] libmachine: Using SSH client type: native
	I0408 12:31:45.010986  417667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.202 22 <nil> <nil>}
	I0408 12:31:45.010992  417667 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:31:45.131806  417667 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-283523
	
	I0408 12:31:45.131829  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetMachineName
	I0408 12:31:45.132314  417667 buildroot.go:166] provisioning hostname "cert-expiration-283523"
	I0408 12:31:45.132373  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetMachineName
	I0408 12:31:45.132643  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:45.135421  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.135991  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.136013  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.136195  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:45.136393  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.136559  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.136693  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:45.136859  417667 main.go:141] libmachine: Using SSH client type: native
	I0408 12:31:45.137082  417667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.202 22 <nil> <nil>}
	I0408 12:31:45.137102  417667 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-283523 && echo "cert-expiration-283523" | sudo tee /etc/hostname
	I0408 12:31:45.281384  417667 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-283523
	
	I0408 12:31:45.281407  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:45.284784  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.285318  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.285485  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.285829  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:45.286051  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.286354  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.286551  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:45.286749  417667 main.go:141] libmachine: Using SSH client type: native
	I0408 12:31:45.286983  417667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.202 22 <nil> <nil>}
	I0408 12:31:45.287001  417667 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-283523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-283523/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-283523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:31:45.406344  417667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:31:45.406367  417667 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:31:45.406460  417667 buildroot.go:174] setting up certificates
	I0408 12:31:45.406469  417667 provision.go:84] configureAuth start
	I0408 12:31:45.406480  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetMachineName
	I0408 12:31:45.406829  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetIP
	I0408 12:31:45.410173  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.410675  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.410713  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.410915  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:45.413723  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.414128  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.414147  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.414324  417667 provision.go:143] copyHostCerts
	I0408 12:31:45.414376  417667 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:31:45.414382  417667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:31:45.414435  417667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:31:45.414526  417667 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:31:45.414529  417667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:31:45.414556  417667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:31:45.414602  417667 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:31:45.414604  417667 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:31:45.414636  417667 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:31:45.414707  417667 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-283523 san=[127.0.0.1 192.168.83.202 cert-expiration-283523 localhost minikube]
	I0408 12:31:45.683015  417667 provision.go:177] copyRemoteCerts
	I0408 12:31:45.683068  417667 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:31:45.683095  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:45.686135  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.686448  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.686474  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.686752  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:45.686964  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.687161  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:45.687303  417667 sshutil.go:53] new ssh client: &{IP:192.168.83.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/cert-expiration-283523/id_rsa Username:docker}
	I0408 12:31:45.775766  417667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:31:45.808109  417667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 12:31:45.842483  417667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:31:45.880582  417667 provision.go:87] duration metric: took 474.09789ms to configureAuth
	I0408 12:31:45.880605  417667 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:31:45.880830  417667 config.go:182] Loaded profile config "cert-expiration-283523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:31:45.880917  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:45.884347  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.884781  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:45.884807  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:45.885016  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:45.885290  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.885535  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:45.885712  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:45.885884  417667 main.go:141] libmachine: Using SSH client type: native
	I0408 12:31:45.886097  417667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.202 22 <nil> <nil>}
	I0408 12:31:45.886115  417667 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:31:46.365061  417455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:31:46.385677  417455 api_server.go:72] duration metric: took 1.021488405s to wait for apiserver process to appear ...
	I0408 12:31:46.385714  417455 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:31:46.385737  417455 api_server.go:253] Checking apiserver healthz at https://192.168.72.75:8443/healthz ...
	I0408 12:31:46.386342  417455 api_server.go:269] stopped: https://192.168.72.75:8443/healthz: Get "https://192.168.72.75:8443/healthz": dial tcp 192.168.72.75:8443: connect: connection refused
	I0408 12:31:46.885904  417455 api_server.go:253] Checking apiserver healthz at https://192.168.72.75:8443/healthz ...
	I0408 12:31:49.456607  417455 api_server.go:279] https://192.168.72.75:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:31:49.456646  417455 api_server.go:103] status: https://192.168.72.75:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:31:49.456664  417455 api_server.go:253] Checking apiserver healthz at https://192.168.72.75:8443/healthz ...
	I0408 12:31:49.509582  417455 api_server.go:279] https://192.168.72.75:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0408 12:31:49.509617  417455 api_server.go:103] status: https://192.168.72.75:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0408 12:31:49.885924  417455 api_server.go:253] Checking apiserver healthz at https://192.168.72.75:8443/healthz ...
	I0408 12:31:49.890342  417455 api_server.go:279] https://192.168.72.75:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:31:49.890383  417455 api_server.go:103] status: https://192.168.72.75:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:31:50.386736  417455 api_server.go:253] Checking apiserver healthz at https://192.168.72.75:8443/healthz ...
	I0408 12:31:50.400916  417455 api_server.go:279] https://192.168.72.75:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:31:50.400958  417455 api_server.go:103] status: https://192.168.72.75:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:31:50.886608  417455 api_server.go:253] Checking apiserver healthz at https://192.168.72.75:8443/healthz ...
	I0408 12:31:50.891747  417455 api_server.go:279] https://192.168.72.75:8443/healthz returned 200:
	ok
	I0408 12:31:50.904234  417455 api_server.go:141] control plane version: v1.29.3
	I0408 12:31:50.904274  417455 api_server.go:131] duration metric: took 4.518552067s to wait for apiserver health ...
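	The healthz wait above is a simple poll loop: roughly every half second minikube re-requests https://192.168.72.75:8443/healthz and treats the 403 and 500 answers (RBAC bootstrap roles and post-start hooks still settling) as "not ready yet", until the endpoint finally returns 200/ok. A minimal sketch of that kind of loop in Go follows; it is illustrative only, not minikube's actual api_server.go code, and the URL, interval, and timeout values are assumptions read off the timestamps above.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			// During bootstrap the apiserver presents a self-signed cert and the
			// probe is anonymous, so certificate verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok" - control plane is healthy
				}
				// 403/500 while bootstrap hooks finish: keep polling.
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.72.75:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}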
	I0408 12:31:50.904287  417455 cni.go:84] Creating CNI manager for ""
	I0408 12:31:50.904295  417455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:31:50.906303  417455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:31:50.907740  417455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:31:50.938591  417455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:31:50.980344  417455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:31:50.991399  417455 system_pods.go:59] 6 kube-system pods found
	I0408 12:31:50.991438  417455 system_pods.go:61] "coredns-76f75df574-vv74m" [c8045152-41fa-41ba-8117-67e6a52867c0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:31:50.991448  417455 system_pods.go:61] "etcd-pause-778946" [50ecd336-b02e-4c69-9b72-fff06eb9dd40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:31:50.991457  417455 system_pods.go:61] "kube-apiserver-pause-778946" [c2f9f358-17ba-4ad9-8767-fb3aab5e167b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:31:50.991468  417455 system_pods.go:61] "kube-controller-manager-pause-778946" [9bb826f6-488c-4888-ab85-c4f4fc2fb656] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:31:50.991476  417455 system_pods.go:61] "kube-proxy-lfqvs" [a39cacc6-aa4b-4918-8f8b-6b269ce85f10] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:31:50.991484  417455 system_pods.go:61] "kube-scheduler-pause-778946" [5f837174-bef0-40f7-bc88-5741a3fe456e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:31:50.991493  417455 system_pods.go:74] duration metric: took 11.125992ms to wait for pod list to return data ...
	I0408 12:31:50.991505  417455 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:31:50.995031  417455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:31:50.995070  417455 node_conditions.go:123] node cpu capacity is 2
	I0408 12:31:50.995084  417455 node_conditions.go:105] duration metric: took 3.572014ms to run NodePressure ...
	I0408 12:31:50.995106  417455 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:31:49.772739  417855 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:31:49.772792  417855 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 12:31:49.772814  417855 cache.go:56] Caching tarball of preloaded images
	I0408 12:31:49.772932  417855 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:31:49.772944  417855 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0408 12:31:49.773053  417855 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/config.json ...
	I0408 12:31:49.773072  417855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/config.json: {Name:mkfc44e46cdbded5629729ddf03ab06a9e5803bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:31:49.773210  417855 start.go:360] acquireMachinesLock for auto-583253: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:31:51.901430  417855 start.go:364] duration metric: took 2.128177227s to acquireMachinesLock for "auto-583253"
	I0408 12:31:51.901507  417855 start.go:93] Provisioning new machine with config: &{Name:auto-583253 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-583253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:31:51.901736  417855 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 12:31:51.903823  417855 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0408 12:31:51.904133  417855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:31:51.904187  417855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:31:51.924430  417855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0408 12:31:51.924950  417855 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:31:51.925548  417855 main.go:141] libmachine: Using API Version  1
	I0408 12:31:51.925571  417855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:31:51.925969  417855 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:31:51.926206  417855 main.go:141] libmachine: (auto-583253) Calling .GetMachineName
	I0408 12:31:51.926394  417855 main.go:141] libmachine: (auto-583253) Calling .DriverName
	I0408 12:31:51.926546  417855 start.go:159] libmachine.API.Create for "auto-583253" (driver="kvm2")
	I0408 12:31:51.926582  417855 client.go:168] LocalClient.Create starting
	I0408 12:31:51.926648  417855 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 12:31:51.926698  417855 main.go:141] libmachine: Decoding PEM data...
	I0408 12:31:51.926724  417855 main.go:141] libmachine: Parsing certificate...
	I0408 12:31:51.926793  417855 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 12:31:51.926820  417855 main.go:141] libmachine: Decoding PEM data...
	I0408 12:31:51.926838  417855 main.go:141] libmachine: Parsing certificate...
	I0408 12:31:51.926864  417855 main.go:141] libmachine: Running pre-create checks...
	I0408 12:31:51.926882  417855 main.go:141] libmachine: (auto-583253) Calling .PreCreateCheck
	I0408 12:31:51.927321  417855 main.go:141] libmachine: (auto-583253) Calling .GetConfigRaw
	I0408 12:31:51.927876  417855 main.go:141] libmachine: Creating machine...
	I0408 12:31:51.927892  417855 main.go:141] libmachine: (auto-583253) Calling .Create
	I0408 12:31:51.928058  417855 main.go:141] libmachine: (auto-583253) Creating KVM machine...
	I0408 12:31:51.929419  417855 main.go:141] libmachine: (auto-583253) DBG | found existing default KVM network
	I0408 12:31:51.931843  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:51.931645  417878 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f820}
	I0408 12:31:51.931869  417855 main.go:141] libmachine: (auto-583253) DBG | created network xml: 
	I0408 12:31:51.931884  417855 main.go:141] libmachine: (auto-583253) DBG | <network>
	I0408 12:31:51.931899  417855 main.go:141] libmachine: (auto-583253) DBG |   <name>mk-auto-583253</name>
	I0408 12:31:51.931909  417855 main.go:141] libmachine: (auto-583253) DBG |   <dns enable='no'/>
	I0408 12:31:51.931920  417855 main.go:141] libmachine: (auto-583253) DBG |   
	I0408 12:31:51.931957  417855 main.go:141] libmachine: (auto-583253) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 12:31:51.931997  417855 main.go:141] libmachine: (auto-583253) DBG |     <dhcp>
	I0408 12:31:51.932010  417855 main.go:141] libmachine: (auto-583253) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 12:31:51.932025  417855 main.go:141] libmachine: (auto-583253) DBG |     </dhcp>
	I0408 12:31:51.932060  417855 main.go:141] libmachine: (auto-583253) DBG |   </ip>
	I0408 12:31:51.932091  417855 main.go:141] libmachine: (auto-583253) DBG |   
	I0408 12:31:51.932108  417855 main.go:141] libmachine: (auto-583253) DBG | </network>
	I0408 12:31:51.932118  417855 main.go:141] libmachine: (auto-583253) DBG | 
	I0408 12:31:51.937868  417855 main.go:141] libmachine: (auto-583253) DBG | trying to create private KVM network mk-auto-583253 192.168.39.0/24...
	I0408 12:31:52.015284  417855 main.go:141] libmachine: (auto-583253) DBG | private KVM network mk-auto-583253 192.168.39.0/24 created
	I0408 12:31:52.015320  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:52.015254  417878 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:31:52.015333  417855 main.go:141] libmachine: (auto-583253) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253 ...
	I0408 12:31:52.015365  417855 main.go:141] libmachine: (auto-583253) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 12:31:52.015431  417855 main.go:141] libmachine: (auto-583253) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 12:31:52.286647  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:52.286478  417878 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253/id_rsa...
	I0408 12:31:52.491067  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:52.490904  417878 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253/auto-583253.rawdisk...
	I0408 12:31:52.491104  417855 main.go:141] libmachine: (auto-583253) DBG | Writing magic tar header
	I0408 12:31:52.491121  417855 main.go:141] libmachine: (auto-583253) DBG | Writing SSH key tar header
	I0408 12:31:52.491142  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:52.491031  417878 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253 ...
	I0408 12:31:52.491159  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253
	I0408 12:31:52.491253  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 12:31:52.491292  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:31:52.491305  417855 main.go:141] libmachine: (auto-583253) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253 (perms=drwx------)
	I0408 12:31:52.491321  417855 main.go:141] libmachine: (auto-583253) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 12:31:52.491336  417855 main.go:141] libmachine: (auto-583253) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 12:31:52.491351  417855 main.go:141] libmachine: (auto-583253) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 12:31:52.491364  417855 main.go:141] libmachine: (auto-583253) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 12:31:52.491378  417855 main.go:141] libmachine: (auto-583253) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 12:31:52.491391  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 12:31:52.491402  417855 main.go:141] libmachine: (auto-583253) Creating domain...
	I0408 12:31:52.491412  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 12:31:52.491426  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home/jenkins
	I0408 12:31:52.491437  417855 main.go:141] libmachine: (auto-583253) DBG | Checking permissions on dir: /home
	I0408 12:31:52.491449  417855 main.go:141] libmachine: (auto-583253) DBG | Skipping /home - not owner
	I0408 12:31:52.492848  417855 main.go:141] libmachine: (auto-583253) define libvirt domain using xml: 
	I0408 12:31:52.492882  417855 main.go:141] libmachine: (auto-583253) <domain type='kvm'>
	I0408 12:31:52.492894  417855 main.go:141] libmachine: (auto-583253)   <name>auto-583253</name>
	I0408 12:31:52.492916  417855 main.go:141] libmachine: (auto-583253)   <memory unit='MiB'>3072</memory>
	I0408 12:31:52.492927  417855 main.go:141] libmachine: (auto-583253)   <vcpu>2</vcpu>
	I0408 12:31:52.492938  417855 main.go:141] libmachine: (auto-583253)   <features>
	I0408 12:31:52.492949  417855 main.go:141] libmachine: (auto-583253)     <acpi/>
	I0408 12:31:52.492960  417855 main.go:141] libmachine: (auto-583253)     <apic/>
	I0408 12:31:52.492971  417855 main.go:141] libmachine: (auto-583253)     <pae/>
	I0408 12:31:52.493002  417855 main.go:141] libmachine: (auto-583253)     
	I0408 12:31:52.493010  417855 main.go:141] libmachine: (auto-583253)   </features>
	I0408 12:31:52.493016  417855 main.go:141] libmachine: (auto-583253)   <cpu mode='host-passthrough'>
	I0408 12:31:52.493022  417855 main.go:141] libmachine: (auto-583253)   
	I0408 12:31:52.493029  417855 main.go:141] libmachine: (auto-583253)   </cpu>
	I0408 12:31:52.493035  417855 main.go:141] libmachine: (auto-583253)   <os>
	I0408 12:31:52.493040  417855 main.go:141] libmachine: (auto-583253)     <type>hvm</type>
	I0408 12:31:52.493045  417855 main.go:141] libmachine: (auto-583253)     <boot dev='cdrom'/>
	I0408 12:31:52.493052  417855 main.go:141] libmachine: (auto-583253)     <boot dev='hd'/>
	I0408 12:31:52.493058  417855 main.go:141] libmachine: (auto-583253)     <bootmenu enable='no'/>
	I0408 12:31:52.493065  417855 main.go:141] libmachine: (auto-583253)   </os>
	I0408 12:31:52.493101  417855 main.go:141] libmachine: (auto-583253)   <devices>
	I0408 12:31:52.493127  417855 main.go:141] libmachine: (auto-583253)     <disk type='file' device='cdrom'>
	I0408 12:31:52.493142  417855 main.go:141] libmachine: (auto-583253)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253/boot2docker.iso'/>
	I0408 12:31:52.493160  417855 main.go:141] libmachine: (auto-583253)       <target dev='hdc' bus='scsi'/>
	I0408 12:31:52.493174  417855 main.go:141] libmachine: (auto-583253)       <readonly/>
	I0408 12:31:52.493185  417855 main.go:141] libmachine: (auto-583253)     </disk>
	I0408 12:31:52.493211  417855 main.go:141] libmachine: (auto-583253)     <disk type='file' device='disk'>
	I0408 12:31:52.493225  417855 main.go:141] libmachine: (auto-583253)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 12:31:52.493281  417855 main.go:141] libmachine: (auto-583253)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/auto-583253/auto-583253.rawdisk'/>
	I0408 12:31:52.493308  417855 main.go:141] libmachine: (auto-583253)       <target dev='hda' bus='virtio'/>
	I0408 12:31:52.493320  417855 main.go:141] libmachine: (auto-583253)     </disk>
	I0408 12:31:52.493331  417855 main.go:141] libmachine: (auto-583253)     <interface type='network'>
	I0408 12:31:52.493339  417855 main.go:141] libmachine: (auto-583253)       <source network='mk-auto-583253'/>
	I0408 12:31:52.493356  417855 main.go:141] libmachine: (auto-583253)       <model type='virtio'/>
	I0408 12:31:52.493367  417855 main.go:141] libmachine: (auto-583253)     </interface>
	I0408 12:31:52.493381  417855 main.go:141] libmachine: (auto-583253)     <interface type='network'>
	I0408 12:31:52.493392  417855 main.go:141] libmachine: (auto-583253)       <source network='default'/>
	I0408 12:31:52.493404  417855 main.go:141] libmachine: (auto-583253)       <model type='virtio'/>
	I0408 12:31:52.493414  417855 main.go:141] libmachine: (auto-583253)     </interface>
	I0408 12:31:52.493425  417855 main.go:141] libmachine: (auto-583253)     <serial type='pty'>
	I0408 12:31:52.493435  417855 main.go:141] libmachine: (auto-583253)       <target port='0'/>
	I0408 12:31:52.493440  417855 main.go:141] libmachine: (auto-583253)     </serial>
	I0408 12:31:52.493450  417855 main.go:141] libmachine: (auto-583253)     <console type='pty'>
	I0408 12:31:52.493467  417855 main.go:141] libmachine: (auto-583253)       <target type='serial' port='0'/>
	I0408 12:31:52.493477  417855 main.go:141] libmachine: (auto-583253)     </console>
	I0408 12:31:52.493486  417855 main.go:141] libmachine: (auto-583253)     <rng model='virtio'>
	I0408 12:31:52.493502  417855 main.go:141] libmachine: (auto-583253)       <backend model='random'>/dev/random</backend>
	I0408 12:31:52.493514  417855 main.go:141] libmachine: (auto-583253)     </rng>
	I0408 12:31:52.493521  417855 main.go:141] libmachine: (auto-583253)     
	I0408 12:31:52.493529  417855 main.go:141] libmachine: (auto-583253)     
	I0408 12:31:52.493540  417855 main.go:141] libmachine: (auto-583253)   </devices>
	I0408 12:31:52.493552  417855 main.go:141] libmachine: (auto-583253) </domain>
	I0408 12:31:52.493562  417855 main.go:141] libmachine: (auto-583253) 
	I0408 12:31:52.498540  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:36:3d:10 in network default
	I0408 12:31:52.499323  417855 main.go:141] libmachine: (auto-583253) Ensuring networks are active...
	I0408 12:31:52.499352  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:52.500099  417855 main.go:141] libmachine: (auto-583253) Ensuring network default is active
	I0408 12:31:52.500483  417855 main.go:141] libmachine: (auto-583253) Ensuring network mk-auto-583253 is active
	I0408 12:31:52.501102  417855 main.go:141] libmachine: (auto-583253) Getting domain xml...
	I0408 12:31:52.501975  417855 main.go:141] libmachine: (auto-583253) Creating domain...
	I0408 12:31:53.804528  417855 main.go:141] libmachine: (auto-583253) Waiting to get IP...
	I0408 12:31:53.805369  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:53.805891  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:53.805924  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:53.805857  417878 retry.go:31] will retry after 240.939735ms: waiting for machine to come up
	I0408 12:31:54.048539  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:54.049141  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:54.049172  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:54.049093  417878 retry.go:31] will retry after 360.180376ms: waiting for machine to come up
	I0408 12:31:54.410630  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:54.411167  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:54.411200  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:54.411104  417878 retry.go:31] will retry after 434.275844ms: waiting for machine to come up
	I0408 12:31:51.645236  417667 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:31:51.645255  417667 machine.go:97] duration metric: took 6.639352131s to provisionDockerMachine
	I0408 12:31:51.645267  417667 start.go:293] postStartSetup for "cert-expiration-283523" (driver="kvm2")
	I0408 12:31:51.645276  417667 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:31:51.645292  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .DriverName
	I0408 12:31:51.645664  417667 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:31:51.645696  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:51.649260  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.649714  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:51.649734  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.650011  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:51.650245  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:51.650466  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:51.650609  417667 sshutil.go:53] new ssh client: &{IP:192.168.83.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/cert-expiration-283523/id_rsa Username:docker}
	I0408 12:31:51.739758  417667 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:31:51.745195  417667 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:31:51.745217  417667 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:31:51.745285  417667 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:31:51.745362  417667 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:31:51.745472  417667 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:31:51.755675  417667 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:31:51.785456  417667 start.go:296] duration metric: took 140.17294ms for postStartSetup
	I0408 12:31:51.785496  417667 fix.go:56] duration metric: took 6.803503593s for fixHost
	I0408 12:31:51.785520  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:51.788867  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.789326  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:51.789345  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.789534  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:51.789779  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:51.789924  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:51.790072  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:51.790264  417667 main.go:141] libmachine: Using SSH client type: native
	I0408 12:31:51.790458  417667 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.202 22 <nil> <nil>}
	I0408 12:31:51.790464  417667 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:31:51.901271  417667 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712579511.892756106
	
	I0408 12:31:51.901284  417667 fix.go:216] guest clock: 1712579511.892756106
	I0408 12:31:51.901290  417667 fix.go:229] Guest: 2024-04-08 12:31:51.892756106 +0000 UTC Remote: 2024-04-08 12:31:51.785499362 +0000 UTC m=+6.966188737 (delta=107.256744ms)
	I0408 12:31:51.901310  417667 fix.go:200] guest clock delta is within tolerance: 107.256744ms
	I0408 12:31:51.901315  417667 start.go:83] releasing machines lock for "cert-expiration-283523", held for 6.919332589s
	I0408 12:31:51.901334  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .DriverName
	I0408 12:31:51.901631  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetIP
	I0408 12:31:51.904431  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.904902  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:51.904939  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.905097  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .DriverName
	I0408 12:31:51.905669  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .DriverName
	I0408 12:31:51.905840  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .DriverName
	I0408 12:31:51.905940  417667 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:31:51.905975  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:51.906044  417667 ssh_runner.go:195] Run: cat /version.json
	I0408 12:31:51.906053  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHHostname
	I0408 12:31:51.908972  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.909063  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.909471  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:51.909492  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.909530  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:d9:3b", ip: ""} in network mk-cert-expiration-283523: {Iface:virbr1 ExpiryTime:2024-04-08 13:28:16 +0000 UTC Type:0 Mac:52:54:00:7c:d9:3b Iaid: IPaddr:192.168.83.202 Prefix:24 Hostname:cert-expiration-283523 Clientid:01:52:54:00:7c:d9:3b}
	I0408 12:31:51.909540  417667 main.go:141] libmachine: (cert-expiration-283523) DBG | domain cert-expiration-283523 has defined IP address 192.168.83.202 and MAC address 52:54:00:7c:d9:3b in network mk-cert-expiration-283523
	I0408 12:31:51.909566  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:51.909732  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:51.909732  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHPort
	I0408 12:31:51.909872  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:51.909912  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHKeyPath
	I0408 12:31:51.910032  417667 main.go:141] libmachine: (cert-expiration-283523) Calling .GetSSHUsername
	I0408 12:31:51.910041  417667 sshutil.go:53] new ssh client: &{IP:192.168.83.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/cert-expiration-283523/id_rsa Username:docker}
	I0408 12:31:51.910151  417667 sshutil.go:53] new ssh client: &{IP:192.168.83.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/cert-expiration-283523/id_rsa Username:docker}
	I0408 12:31:51.997377  417667 ssh_runner.go:195] Run: systemctl --version
	I0408 12:31:52.026582  417667 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:31:52.244848  417667 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:31:52.315094  417667 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:31:52.315177  417667 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:31:52.350367  417667 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 12:31:52.350388  417667 start.go:494] detecting cgroup driver to use...
	I0408 12:31:52.350486  417667 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:31:52.397378  417667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:31:52.449022  417667 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:31:52.449079  417667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:31:52.532737  417667 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:31:52.603567  417667 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:31:52.804194  417667 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:31:52.997720  417667 docker.go:233] disabling docker service ...
	I0408 12:31:52.997804  417667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:31:53.018731  417667 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:31:53.042756  417667 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:31:53.210089  417667 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:31:53.375334  417667 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:31:53.395313  417667 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:31:53.422160  417667 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:31:53.422224  417667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.437479  417667 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:31:53.437559  417667 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.451744  417667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.467245  417667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.483723  417667 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:31:53.498478  417667 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.511660  417667 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.524736  417667 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:31:53.539809  417667 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:31:53.555681  417667 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:31:53.566412  417667 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:31:53.732872  417667 ssh_runner.go:195] Run: sudo systemctl restart crio
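
The preceding ssh_runner lines show the runtime hand-off: cri-dockerd and docker are stopped and masked, /etc/crio/crio.conf.d/02-crio.conf is rewritten with sed (pause image, cgroupfs cgroup manager, conmon_cgroup), and crio is restarted. A minimal Go sketch of that sequence, assuming a hypothetical runCmd helper in place of minikube's ssh_runner and covering only a representative subset of the commands recorded above:

// Illustrative sketch only (not minikube's crio.go): applies the same
// /etc/crio/crio.conf.d/02-crio.conf edits seen in the log above, then
// restarts crio. runCmd is a hypothetical stand-in for minikube's ssh_runner.
package main

import "fmt"

func configureCRIO(runCmd func(string) error) error {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := runCmd(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}
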
	I0408 12:31:51.454892  417455 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:31:51.465775  417455 kubeadm.go:733] kubelet initialised
	I0408 12:31:51.465804  417455 kubeadm.go:734] duration metric: took 10.88052ms waiting for restarted kubelet to initialise ...
	I0408 12:31:51.465814  417455 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:31:51.473338  417455 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-vv74m" in "kube-system" namespace to be "Ready" ...
	I0408 12:31:53.483851  417455 pod_ready.go:102] pod "coredns-76f75df574-vv74m" in "kube-system" namespace has status "Ready":"False"
	I0408 12:31:55.984782  417455 pod_ready.go:102] pod "coredns-76f75df574-vv74m" in "kube-system" namespace has status "Ready":"False"
	I0408 12:31:54.413344  417387 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894 2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d 2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4 2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14 811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7 05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd 7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe ca027e5636f457694efbdcfad3aca602840540a93a268a9f8a8f10355d93c3c7 98f2895446de5bb55dc01052b806d909c933b9bca33ec2cafed20670b345b2bc: (14.840207481s)
	W0408 12:31:54.413441  417387 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894 2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d 2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4 2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14 811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7 05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd 7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe ca027e5636f457694efbdcfad3aca602840540a93a268a9f8a8f10355d93c3c7 98f2895446de5bb55dc01052b806d909c933b9bca33ec2cafed20670b345b2bc: Process exited with status 1
	stdout:
	2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894
	2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d
	
	stderr:
	E0408 12:31:54.396961    3857 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4\": container with ID starting with 2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4 not found: ID does not exist" containerID="2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4"
	time="2024-04-08T12:31:54Z" level=fatal msg="stopping the container \"2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4\": rpc error: code = NotFound desc = could not find container \"2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4\": container with ID starting with 2802077324c14003b0b84d40f6db0b7c5571377ef4b0ddbc160a4b0db6186dd4 not found: ID does not exist"
	I0408 12:31:54.413535  417387 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:31:54.469207  417387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:31:54.485924  417387 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Apr  8 12:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Apr  8 12:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Apr  8 12:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Apr  8 12:31 /etc/kubernetes/scheduler.conf
	
	I0408 12:31:54.486011  417387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:31:54.498174  417387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:31:54.510291  417387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:31:54.521461  417387 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:31:54.521539  417387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:31:54.533788  417387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:31:54.545768  417387 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:31:54.545832  417387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:31:54.557029  417387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:31:54.568260  417387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:31:54.635356  417387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:31:55.621330  417387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:31:55.875936  417387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:31:55.952175  417387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:31:56.105035  417387 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:31:56.105153  417387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:31:56.606086  417387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:31:57.105606  417387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:31:57.122712  417387 api_server.go:72] duration metric: took 1.017681829s to wait for apiserver process to appear ...
	I0408 12:31:57.122752  417387 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:31:57.122779  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:31:54.846635  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:54.847198  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:54.847281  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:54.847142  417878 retry.go:31] will retry after 543.318659ms: waiting for machine to come up
	I0408 12:31:55.392073  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:55.392642  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:55.392671  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:55.392584  417878 retry.go:31] will retry after 641.715276ms: waiting for machine to come up
	I0408 12:31:56.036610  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:56.037313  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:56.037348  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:56.037244  417878 retry.go:31] will retry after 739.859229ms: waiting for machine to come up
	I0408 12:31:56.779587  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:56.780236  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:56.780267  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:56.780170  417878 retry.go:31] will retry after 1.143442861s: waiting for machine to come up
	I0408 12:31:57.925085  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:57.925641  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:57.925672  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:57.925588  417878 retry.go:31] will retry after 1.131605597s: waiting for machine to come up
	I0408 12:31:59.058914  417855 main.go:141] libmachine: (auto-583253) DBG | domain auto-583253 has defined MAC address 52:54:00:94:65:02 in network mk-auto-583253
	I0408 12:31:59.059498  417855 main.go:141] libmachine: (auto-583253) DBG | unable to find current IP address of domain auto-583253 in network mk-auto-583253
	I0408 12:31:59.059527  417855 main.go:141] libmachine: (auto-583253) DBG | I0408 12:31:59.059421  417878 retry.go:31] will retry after 1.154471177s: waiting for machine to come up
	I0408 12:31:57.986707  417455 pod_ready.go:102] pod "coredns-76f75df574-vv74m" in "kube-system" namespace has status "Ready":"False"
	I0408 12:31:59.482581  417455 pod_ready.go:92] pod "coredns-76f75df574-vv74m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:31:59.482613  417455 pod_ready.go:81] duration metric: took 8.009238215s for pod "coredns-76f75df574-vv74m" in "kube-system" namespace to be "Ready" ...
	I0408 12:31:59.482630  417455 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:00.990400  417455 pod_ready.go:92] pod "etcd-pause-778946" in "kube-system" namespace has status "Ready":"True"
	I0408 12:32:00.990432  417455 pod_ready.go:81] duration metric: took 1.507792575s for pod "etcd-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:00.990446  417455 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:00.995791  417455 pod_ready.go:92] pod "kube-apiserver-pause-778946" in "kube-system" namespace has status "Ready":"True"
	I0408 12:32:00.995821  417455 pod_ready.go:81] duration metric: took 5.366502ms for pod "kube-apiserver-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:00.995836  417455 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:00.360210  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:32:00.360252  417387 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:32:00.360268  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:32:00.396957  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:32:00.396995  417387 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:32:00.623338  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:32:00.628899  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:32:00.628948  417387 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:32:01.123112  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:32:01.128585  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:32:01.128613  417387 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:32:01.623177  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:32:01.635343  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:32:01.635391  417387 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:32:02.123177  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:32:02.129140  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0408 12:32:02.137557  417387 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:32:02.137599  417387 api_server.go:131] duration metric: took 5.014837898s to wait for apiserver health ...
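
The api_server.go lines above poll https://192.168.50.62:8443/healthz until the 403/500 responses turn into 200 "ok". A self-contained Go sketch of that kind of polling, assuming an InsecureSkipVerify client and a fixed retry interval rather than minikube's own client setup:

// Illustrative sketch only (not minikube's api_server.go): poll a /healthz
// endpoint until it returns HTTP 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			fmt.Printf("healthz returned %d, retrying\n", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
}
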
	I0408 12:32:02.137613  417387 cni.go:84] Creating CNI manager for ""
	I0408 12:32:02.137622  417387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:32:02.139653  417387 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:32:02.141164  417387 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:32:02.154134  417387 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:32:02.179888  417387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:32:02.192986  417387 system_pods.go:59] 8 kube-system pods found
	I0408 12:32:02.193049  417387 system_pods.go:61] "coredns-7db6d8ff4d-ptd66" [2dccfe48-bff6-4bf2-8a71-38b44b2ae86e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:32:02.193060  417387 system_pods.go:61] "coredns-7db6d8ff4d-sf82f" [14fde41c-94e0-4f1b-adbf-6980c19ee6fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:32:02.193072  417387 system_pods.go:61] "etcd-kubernetes-upgrade-144569" [83e4dec5-0da1-4681-8486-98ab927aa4e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:32:02.193082  417387 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-144569" [9f40390e-921c-46dc-8117-da91a93bf65c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:32:02.193094  417387 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-144569" [c1f96796-bde5-4132-8c79-d42e3ef7d679] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:32:02.193105  417387 system_pods.go:61] "kube-proxy-ln7vt" [3f711f98-9437-464b-b70a-17f76823ab84] Running
	I0408 12:32:02.193114  417387 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-144569" [972424f5-f915-46b8-9634-aa5c3d2ccd71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:32:02.193130  417387 system_pods.go:61] "storage-provisioner" [d4df2886-3560-46e4-970f-35af3f25354d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:32:02.193143  417387 system_pods.go:74] duration metric: took 13.222124ms to wait for pod list to return data ...
	I0408 12:32:02.193157  417387 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:32:02.196876  417387 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:32:02.196918  417387 node_conditions.go:123] node cpu capacity is 2
	I0408 12:32:02.196931  417387 node_conditions.go:105] duration metric: took 3.768867ms to run NodePressure ...
	I0408 12:32:02.196966  417387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:32:02.588081  417387 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:32:02.604979  417387 ops.go:34] apiserver oom_adj: -16
	I0408 12:32:02.605014  417387 kubeadm.go:591] duration metric: took 23.119400321s to restartPrimaryControlPlane
	I0408 12:32:02.605027  417387 kubeadm.go:393] duration metric: took 23.229514654s to StartCluster
	I0408 12:32:02.605051  417387 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:32:02.605148  417387 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:32:02.607018  417387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:32:02.607338  417387 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:32:02.609059  417387 out.go:177] * Verifying Kubernetes components...
	I0408 12:32:02.607481  417387 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:32:02.607648  417387 config.go:182] Loaded profile config "kubernetes-upgrade-144569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:32:02.610642  417387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:32:02.610647  417387 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-144569"
	I0408 12:32:02.610816  417387 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-144569"
	W0408 12:32:02.610836  417387 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:32:02.610648  417387 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-144569"
	I0408 12:32:02.610917  417387 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-144569"
	I0408 12:32:02.610870  417387 host.go:66] Checking if "kubernetes-upgrade-144569" exists ...
	I0408 12:32:02.611341  417387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:32:02.611382  417387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:32:02.611382  417387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:32:02.611418  417387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:32:02.633155  417387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0408 12:32:02.633420  417387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33141
	I0408 12:32:02.633614  417387 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:32:02.634079  417387 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:32:02.634474  417387 main.go:141] libmachine: Using API Version  1
	I0408 12:32:02.634499  417387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:32:02.634634  417387 main.go:141] libmachine: Using API Version  1
	I0408 12:32:02.634645  417387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:32:02.635025  417387 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:32:02.635080  417387 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:32:02.635719  417387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:32:02.635772  417387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:32:02.635778  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetState
	I0408 12:32:02.638162  417387 kapi.go:59] client config for kubernetes-upgrade-144569: &rest.Config{Host:"https://192.168.50.62:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.crt", KeyFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kubernetes-upgrade-144569/client.key", CAFile:"/home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5db80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 12:32:02.638455  417387 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-144569"
	W0408 12:32:02.638472  417387 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:32:02.638496  417387 host.go:66] Checking if "kubernetes-upgrade-144569" exists ...
	I0408 12:32:02.638736  417387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:32:02.638788  417387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:32:02.655058  417387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
	I0408 12:32:02.655547  417387 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:32:02.659738  417387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0408 12:32:02.659879  417387 main.go:141] libmachine: Using API Version  1
	I0408 12:32:02.659906  417387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:32:02.660221  417387 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:32:02.660339  417387 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:32:02.660973  417387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:32:02.661021  417387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:32:02.661421  417387 main.go:141] libmachine: Using API Version  1
	I0408 12:32:02.661445  417387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:32:02.661799  417387 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:32:02.662048  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetState
	I0408 12:32:02.664190  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:32:02.667204  417387 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:32:03.003615  417455 pod_ready.go:92] pod "kube-controller-manager-pause-778946" in "kube-system" namespace has status "Ready":"True"
	I0408 12:32:03.003643  417455 pod_ready.go:81] duration metric: took 2.007798178s for pod "kube-controller-manager-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:03.003659  417455 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lfqvs" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:03.009760  417455 pod_ready.go:92] pod "kube-proxy-lfqvs" in "kube-system" namespace has status "Ready":"True"
	I0408 12:32:03.009787  417455 pod_ready.go:81] duration metric: took 6.119136ms for pod "kube-proxy-lfqvs" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:03.009800  417455 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:03.523627  417455 pod_ready.go:92] pod "kube-scheduler-pause-778946" in "kube-system" namespace has status "Ready":"True"
	I0408 12:32:03.523656  417455 pod_ready.go:81] duration metric: took 513.846684ms for pod "kube-scheduler-pause-778946" in "kube-system" namespace to be "Ready" ...
	I0408 12:32:03.523668  417455 pod_ready.go:38] duration metric: took 12.057844089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
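
The pod_ready.go lines above repeatedly fetch each system-critical pod and wait for its Ready condition to become True. A hedged client-go sketch of that check, assuming the kubeconfig path written earlier in this log and using the pause-778946 etcd pod as the example target:

// Illustrative sketch only (not minikube's pod_ready.go): report whether a
// pod's Ready condition is True, using standard client-go calls.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18588-368424/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(client, "kube-system", "etcd-pause-778946")
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready)
}
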
	I0408 12:32:03.523701  417455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:32:03.542371  417455 ops.go:34] apiserver oom_adj: -16
	I0408 12:32:03.542400  417455 kubeadm.go:591] duration metric: took 19.988175607s to restartPrimaryControlPlane
	I0408 12:32:03.542415  417455 kubeadm.go:393] duration metric: took 20.073797459s to StartCluster
	I0408 12:32:03.542441  417455 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:32:03.542543  417455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:32:03.544070  417455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:32:03.544389  417455 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.75 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:32:03.546248  417455 out.go:177] * Verifying Kubernetes components...
	I0408 12:32:03.544494  417455 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:32:03.544665  417455 config.go:182] Loaded profile config "pause-778946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:32:03.547784  417455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:32:03.549430  417455 out.go:177] * Enabled addons: 
	I0408 12:32:02.668960  417387 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:32:02.668988  417387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:32:02.669028  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:32:02.673188  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:32:02.674058  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:32:02.674084  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:32:02.674358  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:32:02.675191  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:32:02.675516  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:32:02.675775  417387 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa Username:docker}
	I0408 12:32:02.685257  417387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0408 12:32:02.685823  417387 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:32:02.686668  417387 main.go:141] libmachine: Using API Version  1
	I0408 12:32:02.686694  417387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:32:02.687341  417387 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:32:02.687637  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetState
	I0408 12:32:02.690013  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .DriverName
	I0408 12:32:02.690313  417387 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:32:02.690335  417387 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:32:02.690357  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHHostname
	I0408 12:32:02.693744  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:32:02.695820  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHPort
	I0408 12:32:02.695825  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:e9:e6", ip: ""} in network mk-kubernetes-upgrade-144569: {Iface:virbr2 ExpiryTime:2024-04-08 13:25:49 +0000 UTC Type:0 Mac:52:54:00:c1:e9:e6 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:kubernetes-upgrade-144569 Clientid:01:52:54:00:c1:e9:e6}
	I0408 12:32:02.695852  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | domain kubernetes-upgrade-144569 has defined IP address 192.168.50.62 and MAC address 52:54:00:c1:e9:e6 in network mk-kubernetes-upgrade-144569
	I0408 12:32:02.696269  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHKeyPath
	I0408 12:32:02.696487  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .GetSSHUsername
	I0408 12:32:02.696663  417387 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/kubernetes-upgrade-144569/id_rsa Username:docker}
	I0408 12:32:02.892101  417387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:32:02.918311  417387 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:32:02.918418  417387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:32:02.945015  417387 api_server.go:72] duration metric: took 337.629482ms to wait for apiserver process to appear ...
	I0408 12:32:02.945056  417387 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:32:02.945089  417387 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0408 12:32:02.951760  417387 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0408 12:32:02.952963  417387 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:32:02.952989  417387 api_server.go:131] duration metric: took 7.928202ms to wait for apiserver health ...
	I0408 12:32:02.952998  417387 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:32:02.960179  417387 system_pods.go:59] 8 kube-system pods found
	I0408 12:32:02.960212  417387 system_pods.go:61] "coredns-7db6d8ff4d-ptd66" [2dccfe48-bff6-4bf2-8a71-38b44b2ae86e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:32:02.960223  417387 system_pods.go:61] "coredns-7db6d8ff4d-sf82f" [14fde41c-94e0-4f1b-adbf-6980c19ee6fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:32:02.960235  417387 system_pods.go:61] "etcd-kubernetes-upgrade-144569" [83e4dec5-0da1-4681-8486-98ab927aa4e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:32:02.960244  417387 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-144569" [9f40390e-921c-46dc-8117-da91a93bf65c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:32:02.960258  417387 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-144569" [c1f96796-bde5-4132-8c79-d42e3ef7d679] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:32:02.960267  417387 system_pods.go:61] "kube-proxy-ln7vt" [3f711f98-9437-464b-b70a-17f76823ab84] Running
	I0408 12:32:02.960277  417387 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-144569" [972424f5-f915-46b8-9634-aa5c3d2ccd71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:32:02.960286  417387 system_pods.go:61] "storage-provisioner" [d4df2886-3560-46e4-970f-35af3f25354d] Running
	I0408 12:32:02.960295  417387 system_pods.go:74] duration metric: took 7.290262ms to wait for pod list to return data ...
	I0408 12:32:02.960309  417387 kubeadm.go:576] duration metric: took 352.930142ms to wait for: map[apiserver:true system_pods:true]
	I0408 12:32:02.960324  417387 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:32:02.964833  417387 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:32:02.964860  417387 node_conditions.go:123] node cpu capacity is 2
	I0408 12:32:02.964872  417387 node_conditions.go:105] duration metric: took 4.543056ms to run NodePressure ...
	I0408 12:32:02.964888  417387 start.go:240] waiting for startup goroutines ...
	I0408 12:32:03.006883  417387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:32:03.174986  417387 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:32:03.224896  417387 main.go:141] libmachine: Making call to close driver server
	I0408 12:32:03.224926  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .Close
	I0408 12:32:03.225291  417387 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:32:03.225316  417387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:32:03.225327  417387 main.go:141] libmachine: Making call to close driver server
	I0408 12:32:03.225336  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .Close
	I0408 12:32:03.225651  417387 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:32:03.225698  417387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:32:03.225698  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Closing plugin on server side
	I0408 12:32:03.233706  417387 main.go:141] libmachine: Making call to close driver server
	I0408 12:32:03.233734  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .Close
	I0408 12:32:03.234131  417387 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:32:03.234151  417387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:32:03.234172  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Closing plugin on server side
	I0408 12:32:03.911189  417387 main.go:141] libmachine: Making call to close driver server
	I0408 12:32:03.911219  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .Close
	I0408 12:32:03.911582  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) DBG | Closing plugin on server side
	I0408 12:32:03.911633  417387 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:32:03.911651  417387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:32:03.911664  417387 main.go:141] libmachine: Making call to close driver server
	I0408 12:32:03.911672  417387 main.go:141] libmachine: (kubernetes-upgrade-144569) Calling .Close
	I0408 12:32:03.911909  417387 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:32:03.911931  417387 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:32:03.914062  417387 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0408 12:32:03.915355  417387 addons.go:505] duration metric: took 1.307883556s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0408 12:32:03.915410  417387 start.go:245] waiting for cluster config update ...
	I0408 12:32:03.915427  417387 start.go:254] writing updated cluster config ...
	I0408 12:32:03.915787  417387 ssh_runner.go:195] Run: rm -f paused
	I0408 12:32:03.967524  417387 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:32:03.969506  417387 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-144569" cluster and "default" namespace by default
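
	For context, the wait logged at 12:32:02.945-02.952 above is a plain HTTP poll of the apiserver /healthz endpoint until it answers 200 "ok". The Go snippet below is a minimal, hypothetical sketch of that kind of poll, not minikube's actual implementation; the host and port are taken from the log (192.168.50.62:8443), while the timeout and poll interval are assumed values for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok", or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver serves a self-signed certificate during bring-up,
		// so verification is skipped here (illustration only).
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.62:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned 200: ok")
	}

	Once the poll succeeds, the driver goes on to query the control-plane version and the kube-system pod list, as the subsequent log lines show.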
	
	
	==> CRI-O <==
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.873606694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712579524873516855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cab2399-b54f-4564-ae19-5ba59550d911 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.874256310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=831e0851-2eb5-4e94-b1d3-c84a2a1fe653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.874423877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=831e0851-2eb5-4e94-b1d3-c84a2a1fe653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.874890727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:850757b7673db1fb06c487dfb4a10eb34ded1ffcd6f8af2132c7e6145b8886ee,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521405344983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a47e7b95c2e0759c5eb56b77281e816f89092954f93fccd095fcbd7010c5c7,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712579521367289085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e85ef93864e67dc88afbffe7d7b012f9d9fad7718c80504b33aaca3b5d7df4,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521326105817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a9eb1423e12e534499e07df0b1554f42bc1a79f9535cdbac40aea895344434,PodSandboxId:0ccb87901d04117ea00a83b5bf14cf436df8a369511fbe9d889eb7ba44361a36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Created
At:1712579516525241291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a788cfdd4932ca975d4ccf40cfda8e6416a5ebf77de7ac027925e8b47f9c0c9,PodSandboxId:8082a8b86eccd3ceba8bc7859e3a399bb13553b9492e2cd091d9666ff29755e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712579516565347387,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63024042682a98ca22830f941d3582ba2e3628e2ae1951594e4b2b8132a8083f,PodSandboxId:402172ac483c895a5921dfbfb8b4755117ad077d4de3e96d057f5504c1aa0b10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:171257951655111837
9,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d2c1fc878ced0418fb1bd0804e06177534fe92b8010ff7559ba777c37ee4b2,PodSandboxId:93b7112c3b7edc3f9d865f75e37899a841cfd0c0d50048bdd86f25b3d8192ed2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:17125
79514340227638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc3b250a77eb94cea23b608338e366c5e42c748f9494431b26defa63e47ffad,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:171257
9513331800169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cb346b13888e8f2c1130386e96fd0994a1b6548ba64d05a6d23dfd0744c6e1,PodSandboxId:658e4d884ba1e443d85df20b2688b0ab9e76154447253285b40a35e25c810b17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712579511334355513,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498967260130,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498905104071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14,PodSandboxId:06fffa9cb01ec6eef0e88a2f363c14a1e4c7a0b1d8c02401fbc6e6c2d80
15bbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712579495673655997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7,PodSandboxId:05e3d408847689596af89d3fae1e6d31bfaea2a0abc14b87d14068b0d32a9f2b,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712579495580028213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a,PodSandboxId:6541c03075e408aa6316b117f8c0737a79940d3102b420d430ea513fecb2b360,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712579495437258763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd,PodSandboxId:052f94812f70c2affeb3c0299efc93e7ec436efc82708e0edbe2bf46497603e8,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712579495367642807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe,PodSandboxId:4e02197621a0ab42d4e553d4ba82408d7b97d80fe90b317e7a4e27ba9fb32324,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712579494980393638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=831e0851-2eb5-4e94-b1d3-c84a2a1fe653 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.961260119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73d1e696-6f26-45b0-b4a9-5fc75d5bb673 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.961418590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73d1e696-6f26-45b0-b4a9-5fc75d5bb673 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.963260472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efea00d5-6904-49e7-afa9-ff6bc7d49d50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.963915745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712579524963882246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efea00d5-6904-49e7-afa9-ff6bc7d49d50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.974869388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6b7de2d-1bdc-4efe-859e-4d6d7fa153d6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.974974732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6b7de2d-1bdc-4efe-859e-4d6d7fa153d6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:04 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:04.975747133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:850757b7673db1fb06c487dfb4a10eb34ded1ffcd6f8af2132c7e6145b8886ee,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521405344983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a47e7b95c2e0759c5eb56b77281e816f89092954f93fccd095fcbd7010c5c7,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712579521367289085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e85ef93864e67dc88afbffe7d7b012f9d9fad7718c80504b33aaca3b5d7df4,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521326105817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a9eb1423e12e534499e07df0b1554f42bc1a79f9535cdbac40aea895344434,PodSandboxId:0ccb87901d04117ea00a83b5bf14cf436df8a369511fbe9d889eb7ba44361a36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Created
At:1712579516525241291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a788cfdd4932ca975d4ccf40cfda8e6416a5ebf77de7ac027925e8b47f9c0c9,PodSandboxId:8082a8b86eccd3ceba8bc7859e3a399bb13553b9492e2cd091d9666ff29755e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712579516565347387,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63024042682a98ca22830f941d3582ba2e3628e2ae1951594e4b2b8132a8083f,PodSandboxId:402172ac483c895a5921dfbfb8b4755117ad077d4de3e96d057f5504c1aa0b10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:171257951655111837
9,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d2c1fc878ced0418fb1bd0804e06177534fe92b8010ff7559ba777c37ee4b2,PodSandboxId:93b7112c3b7edc3f9d865f75e37899a841cfd0c0d50048bdd86f25b3d8192ed2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:17125
79514340227638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc3b250a77eb94cea23b608338e366c5e42c748f9494431b26defa63e47ffad,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:171257
9513331800169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cb346b13888e8f2c1130386e96fd0994a1b6548ba64d05a6d23dfd0744c6e1,PodSandboxId:658e4d884ba1e443d85df20b2688b0ab9e76154447253285b40a35e25c810b17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712579511334355513,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498967260130,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498905104071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14,PodSandboxId:06fffa9cb01ec6eef0e88a2f363c14a1e4c7a0b1d8c02401fbc6e6c2d80
15bbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712579495673655997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7,PodSandboxId:05e3d408847689596af89d3fae1e6d31bfaea2a0abc14b87d14068b0d32a9f2b,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712579495580028213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a,PodSandboxId:6541c03075e408aa6316b117f8c0737a79940d3102b420d430ea513fecb2b360,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712579495437258763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd,PodSandboxId:052f94812f70c2affeb3c0299efc93e7ec436efc82708e0edbe2bf46497603e8,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712579495367642807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe,PodSandboxId:4e02197621a0ab42d4e553d4ba82408d7b97d80fe90b317e7a4e27ba9fb32324,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712579494980393638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6b7de2d-1bdc-4efe-859e-4d6d7fa153d6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.058117999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dc1db3a-394b-4045-be89-75805f3a4656 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.058255564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dc1db3a-394b-4045-be89-75805f3a4656 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.061051786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94ceb53c-8e8e-4d5b-b8f4-7f7925356422 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.063054443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712579525062954778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94ceb53c-8e8e-4d5b-b8f4-7f7925356422 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.067996004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59eb2b17-3f00-4762-86c3-0839170bb6fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.068110728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59eb2b17-3f00-4762-86c3-0839170bb6fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.076050987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:850757b7673db1fb06c487dfb4a10eb34ded1ffcd6f8af2132c7e6145b8886ee,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521405344983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a47e7b95c2e0759c5eb56b77281e816f89092954f93fccd095fcbd7010c5c7,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712579521367289085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e85ef93864e67dc88afbffe7d7b012f9d9fad7718c80504b33aaca3b5d7df4,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521326105817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a9eb1423e12e534499e07df0b1554f42bc1a79f9535cdbac40aea895344434,PodSandboxId:0ccb87901d04117ea00a83b5bf14cf436df8a369511fbe9d889eb7ba44361a36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Created
At:1712579516525241291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a788cfdd4932ca975d4ccf40cfda8e6416a5ebf77de7ac027925e8b47f9c0c9,PodSandboxId:8082a8b86eccd3ceba8bc7859e3a399bb13553b9492e2cd091d9666ff29755e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712579516565347387,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63024042682a98ca22830f941d3582ba2e3628e2ae1951594e4b2b8132a8083f,PodSandboxId:402172ac483c895a5921dfbfb8b4755117ad077d4de3e96d057f5504c1aa0b10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:171257951655111837
9,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d2c1fc878ced0418fb1bd0804e06177534fe92b8010ff7559ba777c37ee4b2,PodSandboxId:93b7112c3b7edc3f9d865f75e37899a841cfd0c0d50048bdd86f25b3d8192ed2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:17125
79514340227638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc3b250a77eb94cea23b608338e366c5e42c748f9494431b26defa63e47ffad,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:171257
9513331800169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cb346b13888e8f2c1130386e96fd0994a1b6548ba64d05a6d23dfd0744c6e1,PodSandboxId:658e4d884ba1e443d85df20b2688b0ab9e76154447253285b40a35e25c810b17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712579511334355513,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498967260130,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498905104071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14,PodSandboxId:06fffa9cb01ec6eef0e88a2f363c14a1e4c7a0b1d8c02401fbc6e6c2d80
15bbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712579495673655997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7,PodSandboxId:05e3d408847689596af89d3fae1e6d31bfaea2a0abc14b87d14068b0d32a9f2b,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712579495580028213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a,PodSandboxId:6541c03075e408aa6316b117f8c0737a79940d3102b420d430ea513fecb2b360,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712579495437258763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd,PodSandboxId:052f94812f70c2affeb3c0299efc93e7ec436efc82708e0edbe2bf46497603e8,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712579495367642807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe,PodSandboxId:4e02197621a0ab42d4e553d4ba82408d7b97d80fe90b317e7a4e27ba9fb32324,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712579494980393638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59eb2b17-3f00-4762-86c3-0839170bb6fb name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.165263567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a735205-9dce-488c-bfc7-7c0397611278 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.165368316Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a735205-9dce-488c-bfc7-7c0397611278 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.170506011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33ce32ca-edf1-4355-9e9d-fce8ece53cd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.171163691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712579525171126991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33ce32ca-edf1-4355-9e9d-fce8ece53cd5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.172462280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdd2dd11-2b36-4bea-8967-708033891ce4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.173216433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdd2dd11-2b36-4bea-8967-708033891ce4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:32:05 kubernetes-upgrade-144569 crio[3002]: time="2024-04-08 12:32:05.174440319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:850757b7673db1fb06c487dfb4a10eb34ded1ffcd6f8af2132c7e6145b8886ee,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521405344983,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a47e7b95c2e0759c5eb56b77281e816f89092954f93fccd095fcbd7010c5c7,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712579521367289085,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e85ef93864e67dc88afbffe7d7b012f9d9fad7718c80504b33aaca3b5d7df4,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712579521326105817,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a9eb1423e12e534499e07df0b1554f42bc1a79f9535cdbac40aea895344434,PodSandboxId:0ccb87901d04117ea00a83b5bf14cf436df8a369511fbe9d889eb7ba44361a36,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Created
At:1712579516525241291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a788cfdd4932ca975d4ccf40cfda8e6416a5ebf77de7ac027925e8b47f9c0c9,PodSandboxId:8082a8b86eccd3ceba8bc7859e3a399bb13553b9492e2cd091d9666ff29755e6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712579516565347387,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63024042682a98ca22830f941d3582ba2e3628e2ae1951594e4b2b8132a8083f,PodSandboxId:402172ac483c895a5921dfbfb8b4755117ad077d4de3e96d057f5504c1aa0b10,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:171257951655111837
9,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d2c1fc878ced0418fb1bd0804e06177534fe92b8010ff7559ba777c37ee4b2,PodSandboxId:93b7112c3b7edc3f9d865f75e37899a841cfd0c0d50048bdd86f25b3d8192ed2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:17125
79514340227638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cc3b250a77eb94cea23b608338e366c5e42c748f9494431b26defa63e47ffad,PodSandboxId:2a9f609ee1494f268090ec919a0806c0c4af2fb29b5532e74eb2bf7dccfdece6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:171257
9513331800169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4df2886-3560-46e4-970f-35af3f25354d,},Annotations:map[string]string{io.kubernetes.container.hash: 84d662a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6cb346b13888e8f2c1130386e96fd0994a1b6548ba64d05a6d23dfd0744c6e1,PodSandboxId:658e4d884ba1e443d85df20b2688b0ab9e76154447253285b40a35e25c810b17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712579511334355513,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894,PodSandboxId:a95ee1137b202ef0ab2cf8911e62f0794ed1c8aa82f8b9123549038f171f4b99,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498967260130,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sf82f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14fde41c-94e0-4f1b-adbf-6980c19ee6fb,},Annotations:map[string]string{io.kubernetes.container.hash: 180cefac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d,PodSandboxId:14c175529a05babb6a389514eb909779bdd81166bac9b378cd01910510ef19f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1712579498905104071,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ptd66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dccfe48-bff6-4bf2-8a71-38b44b2ae86e,},Annotations:map[string]string{io.kubernetes.container.hash: fc0cedb0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14,PodSandboxId:06fffa9cb01ec6eef0e88a2f363c14a1e4c7a0b1d8c02401fbc6e6c2d80
15bbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1712579495673655997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041cab7c2ab686abb207a0e57fa93e1e,},Annotations:map[string]string{io.kubernetes.container.hash: d63dd84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7,PodSandboxId:05e3d408847689596af89d3fae1e6d31bfaea2a0abc14b87d14068b0d32a9f2b,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1712579495580028213,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf94cc4350a2b7bf80788eecc114a97b,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a,PodSandboxId:6541c03075e408aa6316b117f8c0737a79940d3102b420d430ea513fecb2b360,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1712579495437258763,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4141a38370eb2b79aa720916c5b9b74d,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd,PodSandboxId:052f94812f70c2affeb3c0299efc93e7ec436efc82708e0edbe2bf46497603e8,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1712579495367642807,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-144569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c5ad5286245e5d018a61c4166db43c,},Annotations:map[string]string{io.kubernetes.container.hash: 254b67a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe,PodSandboxId:4e02197621a0ab42d4e553d4ba82408d7b97d80fe90b317e7a4e27ba9fb32324,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1712579494980393638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ln7vt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f711f98-9437-464b-b70a-17f76823ab84,},Annotations:map[string]string{io.kubernetes.container.hash: 2d7419a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdd2dd11-2b36-4bea-8967-708033891ce4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	850757b7673db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   a95ee1137b202       coredns-7db6d8ff4d-sf82f
	56a47e7b95c2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   2a9f609ee1494       storage-provisioner
	05e85ef93864e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   14c175529a05b       coredns-7db6d8ff4d-ptd66
	4a788cfdd4932       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   8 seconds ago       Running             kube-apiserver            2                   8082a8b86eccd       kube-apiserver-kubernetes-upgrade-144569
	63024042682a9       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   8 seconds ago       Running             kube-controller-manager   2                   402172ac483c8       kube-controller-manager-kubernetes-upgrade-144569
	37a9eb1423e12       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      2                   0ccb87901d041       etcd-kubernetes-upgrade-144569
	a9d2c1fc878ce       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   10 seconds ago      Running             kube-scheduler            2                   93b7112c3b7ed       kube-scheduler-kubernetes-upgrade-144569
	2cc3b250a77eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       2                   2a9f609ee1494       storage-provisioner
	c6cb346b13888       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   13 seconds ago      Running             kube-proxy                2                   658e4d884ba1e       kube-proxy-ln7vt
	2ee46650d706f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   a95ee1137b202       coredns-7db6d8ff4d-sf82f
	2edebe1dba2a8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   14c175529a05b       coredns-7db6d8ff4d-ptd66
	2e76e1748bd48       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   06fffa9cb01ec       etcd-kubernetes-upgrade-144569
	811b06480d571       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   29 seconds ago      Exited              kube-scheduler            1                   05e3d40884768       kube-scheduler-kubernetes-upgrade-144569
	05b52699faeb0       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   29 seconds ago      Exited              kube-controller-manager   1                   6541c03075e40       kube-controller-manager-kubernetes-upgrade-144569
	b9ac7c2b9d4db       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   29 seconds ago      Exited              kube-apiserver            1                   052f94812f70c       kube-apiserver-kubernetes-upgrade-144569
	7a7590a189986       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   30 seconds ago      Exited              kube-proxy                1                   4e02197621a0a       kube-proxy-ln7vt
	
	
	==> coredns [05e85ef93864e67dc88afbffe7d7b012f9d9fad7718c80504b33aaca3b5d7df4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [850757b7673db1fb06c487dfb4a10eb34ded1ffcd6f8af2132c7e6145b8886ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-144569
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-144569
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:31:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-144569
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 12:32:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:32:00 +0000   Mon, 08 Apr 2024 12:31:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:32:00 +0000   Mon, 08 Apr 2024 12:31:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:32:00 +0000   Mon, 08 Apr 2024 12:31:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:32:00 +0000   Mon, 08 Apr 2024 12:31:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.62
	  Hostname:    kubernetes-upgrade-144569
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 12fe8e49414f4120a79841358391af84
	  System UUID:                12fe8e49-414f-4120-a798-41358391af84
	  Boot ID:                    13022407-00a8-4235-b5cc-250960592df7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-ptd66                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 coredns-7db6d8ff4d-sf82f                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-kubernetes-upgrade-144569                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         52s
	  kube-system                 kube-apiserver-kubernetes-upgrade-144569             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-144569    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-ln7vt                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-kubernetes-upgrade-144569             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-144569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 64s)  kubelet          Node kubernetes-upgrade-144569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node kubernetes-upgrade-144569 status is now: NodeHasSufficientMemory
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           45s                node-controller  Node kubernetes-upgrade-144569 event: Registered Node kubernetes-upgrade-144569 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-144569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-144569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-144569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.998993] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.061674] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075061] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.173168] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.175824] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.344309] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +5.073472] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.065556] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 8 12:31] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[ +10.020058] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.095723] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.393806] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.691662] systemd-fstab-generator[2185]: Ignoring "noauto" option for root device
	[  +0.099029] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.074659] systemd-fstab-generator[2197]: Ignoring "noauto" option for root device
	[  +0.665411] systemd-fstab-generator[2398]: Ignoring "noauto" option for root device
	[  +0.300884] systemd-fstab-generator[2492]: Ignoring "noauto" option for root device
	[  +0.961558] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +1.398278] systemd-fstab-generator[3222]: Ignoring "noauto" option for root device
	[ +12.508611] kauditd_printk_skb: 300 callbacks suppressed
	[  +5.454005] systemd-fstab-generator[4058]: Ignoring "noauto" option for root device
	[  +0.094486] kauditd_printk_skb: 7 callbacks suppressed
	[Apr 8 12:32] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.305548] systemd-fstab-generator[4547]: Ignoring "noauto" option for root device
	
	
	==> etcd [2e76e1748bd482ed79118567cdb71722c9299f74bdea35ecd5f4224b7e0ace14] <==
	{"level":"warn","ts":"2024-04-08T12:31:36.439755Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-08T12:31:36.439949Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.62:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.62:2380","--initial-cluster=kubernetes-upgrade-144569=https://192.168.50.62:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.62:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.62:2380","--name=kubernetes-upgrade-144569","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot
-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-04-08T12:31:36.440064Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-04-08T12:31:36.440114Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-04-08T12:31:36.440142Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.62:2380"]}
	{"level":"info","ts":"2024-04-08T12:31:36.440193Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T12:31:36.442755Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.62:2379"]}
	{"level":"info","ts":"2024-04-08T12:31:36.44572Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-144569","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.62:2380"],"listen-peer-urls":["https://192.168.50.62:2380"],"advertise-client-urls":["https://192.168.50.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","in
itial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-04-08T12:31:36.521385Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"75.425686ms"}
	
	
	==> etcd [37a9eb1423e12e534499e07df0b1554f42bc1a79f9535cdbac40aea895344434] <==
	{"level":"info","ts":"2024-04-08T12:31:57.060403Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T12:31:57.065338Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-04-08T12:31:57.065384Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.62:2380"}
	{"level":"info","ts":"2024-04-08T12:31:57.065398Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-08T12:31:57.065494Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:31:57.067599Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:31:57.067663Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-08T12:31:57.068475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 switched to configuration voters=(5247593733537193879)"}
	{"level":"info","ts":"2024-04-08T12:31:57.070645Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","added-peer-id":"48d332b29d0cdf97","added-peer-peer-urls":["https://192.168.50.62:2380"]}
	{"level":"info","ts":"2024-04-08T12:31:57.070931Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4f4301e400b1ef13","local-member-id":"48d332b29d0cdf97","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:31:57.070978Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:31:58.916403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-08T12:31:58.916517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:31:58.916604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgPreVoteResp from 48d332b29d0cdf97 at term 2"}
	{"level":"info","ts":"2024-04-08T12:31:58.916623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became candidate at term 3"}
	{"level":"info","ts":"2024-04-08T12:31:58.916629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 received MsgVoteResp from 48d332b29d0cdf97 at term 3"}
	{"level":"info","ts":"2024-04-08T12:31:58.916637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"48d332b29d0cdf97 became leader at term 3"}
	{"level":"info","ts":"2024-04-08T12:31:58.916644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 48d332b29d0cdf97 elected leader 48d332b29d0cdf97 at term 3"}
	{"level":"info","ts":"2024-04-08T12:31:58.923356Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:31:58.925393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.62:2379"}
	{"level":"info","ts":"2024-04-08T12:31:58.925636Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"48d332b29d0cdf97","local-member-attributes":"{Name:kubernetes-upgrade-144569 ClientURLs:[https://192.168.50.62:2379]}","request-path":"/0/members/48d332b29d0cdf97/attributes","cluster-id":"4f4301e400b1ef13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:31:58.92583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:31:58.926172Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:31:58.926236Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:31:58.92829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:32:05 up 1 min,  0 users,  load average: 1.21, 0.40, 0.14
	Linux kubernetes-upgrade-144569 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4a788cfdd4932ca975d4ccf40cfda8e6416a5ebf77de7ac027925e8b47f9c0c9] <==
	I0408 12:32:00.332140       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0408 12:32:00.332277       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0408 12:32:00.412682       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0408 12:32:00.412791       1 policy_source.go:224] refreshing policies
	I0408 12:32:00.432145       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0408 12:32:00.433734       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0408 12:32:00.439837       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0408 12:32:00.442038       1 aggregator.go:165] initial CRD sync complete...
	I0408 12:32:00.442136       1 autoregister_controller.go:141] Starting autoregister controller
	I0408 12:32:00.442163       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0408 12:32:00.442193       1 cache.go:39] Caches are synced for autoregister controller
	I0408 12:32:00.502376       1 shared_informer.go:320] Caches are synced for configmaps
	I0408 12:32:00.504081       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0408 12:32:00.514095       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0408 12:32:00.514191       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0408 12:32:00.524234       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0408 12:32:00.540661       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0408 12:32:00.551139       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0408 12:32:01.306247       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0408 12:32:01.619390       1 controller.go:615] quota admission added evaluator for: endpoints
	I0408 12:32:02.373221       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0408 12:32:02.387870       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0408 12:32:02.457152       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0408 12:32:02.532051       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0408 12:32:02.554100       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [b9ac7c2b9d4dbd4d7e388c5654605a00b73e643bb6d1510dd020e2ac2d094bcd] <==
	I0408 12:31:36.223031       1 options.go:221] external host was not specified, using 192.168.50.62
	I0408 12:31:36.225087       1 server.go:148] Version: v1.30.0-rc.0
	I0408 12:31:36.225171       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0408 12:31:36.730923       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0408 12:31:36.731443       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0408 12:31:36.731468       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0408 12:31:36.740846       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0408 12:31:36.743590       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0408 12:31:36.743612       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0408 12:31:36.743790       1 instance.go:299] Using reconciler: lease
	W0408 12:31:36.744708       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a] <==
	
	
	==> kube-controller-manager [63024042682a98ca22830f941d3582ba2e3628e2ae1951594e4b2b8132a8083f] <==
	I0408 12:32:02.652646       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0408 12:32:02.652676       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0408 12:32:02.689449       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0408 12:32:02.689640       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0408 12:32:02.689657       1 shared_informer.go:313] Waiting for caches to sync for job
	I0408 12:32:02.716491       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0408 12:32:02.716735       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0408 12:32:02.716754       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0408 12:32:02.912357       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0408 12:32:02.912416       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0408 12:32:02.912425       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0408 12:32:02.962234       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0408 12:32:02.962363       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0408 12:32:02.962399       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0408 12:32:03.010954       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0408 12:32:03.011034       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0408 12:32:03.061732       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0408 12:32:03.061794       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0408 12:32:03.061804       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0408 12:32:03.061864       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0408 12:32:03.061901       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0408 12:32:03.061910       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0408 12:32:03.112189       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0408 12:32:03.112345       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0408 12:32:03.112394       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	
	
	==> kube-proxy [7a7590a189986a440dbe63546a67ee25f73aaff77394aebe65c265c3ff03b7fe] <==
	
	
	==> kube-proxy [c6cb346b13888e8f2c1130386e96fd0994a1b6548ba64d05a6d23dfd0744c6e1] <==
	I0408 12:31:51.509410       1 server_linux.go:69] "Using iptables proxy"
	E0408 12:31:51.511957       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-144569\": dial tcp 192.168.50.62:8443: connect: connection refused"
	E0408 12:31:52.679963       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-144569\": dial tcp 192.168.50.62:8443: connect: connection refused"
	E0408 12:31:55.061028       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-144569\": dial tcp 192.168.50.62:8443: connect: connection refused"
	I0408 12:32:00.471401       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.62"]
	I0408 12:32:00.546681       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0408 12:32:00.547651       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:32:00.547832       1 server_linux.go:165] "Using iptables Proxier"
	I0408 12:32:00.551346       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:32:00.551709       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0408 12:32:00.553616       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:32:00.554775       1 config.go:192] "Starting service config controller"
	I0408 12:32:00.554822       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 12:32:00.554859       1 config.go:101] "Starting endpoint slice config controller"
	I0408 12:32:00.554874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 12:32:00.555351       1 config.go:319] "Starting node config controller"
	I0408 12:32:00.556741       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 12:32:00.655425       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0408 12:32:00.655612       1 shared_informer.go:320] Caches are synced for service config
	I0408 12:32:00.656927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [811b06480d5713278b0f862923ed5e7ed52981e46a79678b32033194f38d70f7] <==
	
	
	==> kube-scheduler [a9d2c1fc878ced0418fb1bd0804e06177534fe92b8010ff7559ba777c37ee4b2] <==
	I0408 12:31:57.640277       1 serving.go:380] Generated self-signed cert in-memory
	W0408 12:32:00.418641       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 12:32:00.418743       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:32:00.418754       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 12:32:00.418761       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 12:32:00.503664       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0408 12:32:00.506788       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:32:00.513606       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0408 12:32:00.520665       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0408 12:32:00.520760       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 12:32:00.520793       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 12:32:00.620973       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 12:31:56 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:31:56.523335    4065 scope.go:117] "RemoveContainer" containerID="05b52699faeb00901ec432a2751f82b7b61fb76b0409e9032e2b91e72bdcf50a"
	Apr 08 12:31:56 kubernetes-upgrade-144569 kubelet[4065]: E0408 12:31:56.620116    4065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-144569?timeout=10s\": dial tcp 192.168.50.62:8443: connect: connection refused" interval="800ms"
	Apr 08 12:31:56 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:31:56.726261    4065 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-144569"
	Apr 08 12:31:56 kubernetes-upgrade-144569 kubelet[4065]: E0408 12:31:56.730394    4065 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.62:8443: connect: connection refused" node="kubernetes-upgrade-144569"
	Apr 08 12:31:56 kubernetes-upgrade-144569 kubelet[4065]: W0408 12:31:56.808124    4065 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-144569&limit=500&resourceVersion=0": dial tcp 192.168.50.62:8443: connect: connection refused
	Apr 08 12:31:56 kubernetes-upgrade-144569 kubelet[4065]: E0408 12:31:56.808196    4065 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-144569&limit=500&resourceVersion=0": dial tcp 192.168.50.62:8443: connect: connection refused
	Apr 08 12:31:57 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:31:57.532588    4065 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-144569"
	Apr 08 12:32:00 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:00.480826    4065 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-144569"
	Apr 08 12:32:00 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:00.481303    4065 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-144569"
	Apr 08 12:32:00 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:00.485197    4065 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 08 12:32:00 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:00.486439    4065 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 08 12:32:00 kubernetes-upgrade-144569 kubelet[4065]: E0408 12:32:00.547304    4065 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-144569\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-144569"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.004434    4065 apiserver.go:52] "Watching apiserver"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.009617    4065 topology_manager.go:215] "Topology Admit Handler" podUID="d4df2886-3560-46e4-970f-35af3f25354d" podNamespace="kube-system" podName="storage-provisioner"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.009740    4065 topology_manager.go:215] "Topology Admit Handler" podUID="3f711f98-9437-464b-b70a-17f76823ab84" podNamespace="kube-system" podName="kube-proxy-ln7vt"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.009799    4065 topology_manager.go:215] "Topology Admit Handler" podUID="2dccfe48-bff6-4bf2-8a71-38b44b2ae86e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ptd66"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.009838    4065 topology_manager.go:215] "Topology Admit Handler" podUID="14fde41c-94e0-4f1b-adbf-6980c19ee6fb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sf82f"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.012492    4065 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.081628    4065 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d4df2886-3560-46e4-970f-35af3f25354d-tmp\") pod \"storage-provisioner\" (UID: \"d4df2886-3560-46e4-970f-35af3f25354d\") " pod="kube-system/storage-provisioner"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.082026    4065 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f711f98-9437-464b-b70a-17f76823ab84-lib-modules\") pod \"kube-proxy-ln7vt\" (UID: \"3f711f98-9437-464b-b70a-17f76823ab84\") " pod="kube-system/kube-proxy-ln7vt"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.082269    4065 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f711f98-9437-464b-b70a-17f76823ab84-xtables-lock\") pod \"kube-proxy-ln7vt\" (UID: \"3f711f98-9437-464b-b70a-17f76823ab84\") " pod="kube-system/kube-proxy-ln7vt"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.311134    4065 scope.go:117] "RemoveContainer" containerID="2edebe1dba2a831cfdf824fd1704745b586753202c126289b60eebda83f3619d"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.311485    4065 scope.go:117] "RemoveContainer" containerID="2cc3b250a77eb94cea23b608338e366c5e42c748f9494431b26defa63e47ffad"
	Apr 08 12:32:01 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:01.312691    4065 scope.go:117] "RemoveContainer" containerID="2ee46650d706fded7d22efc220b9b30b6277eddcca1ae69049cbc36bf41c8894"
	Apr 08 12:32:03 kubernetes-upgrade-144569 kubelet[4065]: I0408 12:32:03.567487    4065 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [2cc3b250a77eb94cea23b608338e366c5e42c748f9494431b26defa63e47ffad] <==
	I0408 12:31:53.436800       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0408 12:31:53.439476       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [56a47e7b95c2e0759c5eb56b77281e816f89092954f93fccd095fcbd7010c5c7] <==
	I0408 12:32:01.559443       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:32:01.602855       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:32:01.603116       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:32:01.626781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:32:01.627791       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-144569_01e02745-6c86-4eb2-b2e6-17e0fde294c8!
	I0408 12:32:01.628656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c26e91c7-5c5a-4496-ab19-8b42b47ab936", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-144569_01e02745-6c86-4eb2-b2e6-17e0fde294c8 became leader
	I0408 12:32:01.728589       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-144569_01e02745-6c86-4eb2-b2e6-17e0fde294c8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-144569 -n kubernetes-upgrade-144569
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-144569 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-144569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-144569
--- FAIL: TestKubernetesUpgrade (420.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (275.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-384148 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-384148 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m35.067102255s)

                                                
                                                
-- stdout --
	* [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:36:40.868785  427857 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:36:40.868968  427857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:36:40.868980  427857 out.go:304] Setting ErrFile to fd 2...
	I0408 12:36:40.868987  427857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:36:40.869785  427857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:36:40.870963  427857 out.go:298] Setting JSON to false
	I0408 12:36:40.872831  427857 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8344,"bootTime":1712571457,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:36:40.872943  427857 start.go:139] virtualization: kvm guest
	I0408 12:36:40.875595  427857 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:36:40.878285  427857 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:36:40.878201  427857 notify.go:220] Checking for updates...
	I0408 12:36:40.880282  427857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:36:40.882189  427857 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:36:40.884124  427857 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:36:40.885791  427857 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:36:40.887623  427857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:36:40.890109  427857 config.go:182] Loaded profile config "cert-expiration-283523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:36:40.890257  427857 config.go:182] Loaded profile config "enable-default-cni-583253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:36:40.890379  427857 config.go:182] Loaded profile config "flannel-583253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:36:40.890555  427857 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:36:40.932704  427857 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 12:36:40.934489  427857 start.go:297] selected driver: kvm2
	I0408 12:36:40.934512  427857 start.go:901] validating driver "kvm2" against <nil>
	I0408 12:36:40.934547  427857 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:36:40.935368  427857 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:36:40.935476  427857 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:36:40.952974  427857 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:36:40.953041  427857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 12:36:40.953262  427857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:36:40.953334  427857 cni.go:84] Creating CNI manager for ""
	I0408 12:36:40.953355  427857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:36:40.953362  427857 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 12:36:40.953410  427857 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:36:40.953515  427857 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:36:40.956029  427857 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:36:40.957885  427857 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:36:40.957969  427857 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:36:40.957980  427857 cache.go:56] Caching tarball of preloaded images
	I0408 12:36:40.958123  427857 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:36:40.958141  427857 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:36:40.958298  427857 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:36:40.958330  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json: {Name:mk19cded6c80769b6f537493d0df29d1fecf8eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:36:40.958533  427857 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:36:40.958582  427857 start.go:364] duration metric: took 21.202µs to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:36:40.958603  427857 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:36:40.958713  427857 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 12:36:40.962071  427857 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 12:36:40.962298  427857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:36:40.962358  427857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:36:40.978405  427857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I0408 12:36:40.979030  427857 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:36:40.979854  427857 main.go:141] libmachine: Using API Version  1
	I0408 12:36:40.979890  427857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:36:40.980373  427857 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:36:40.980684  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:36:40.980898  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:36:40.981189  427857 start.go:159] libmachine.API.Create for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:36:40.981237  427857 client.go:168] LocalClient.Create starting
	I0408 12:36:40.981285  427857 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 12:36:40.981338  427857 main.go:141] libmachine: Decoding PEM data...
	I0408 12:36:40.981361  427857 main.go:141] libmachine: Parsing certificate...
	I0408 12:36:40.981469  427857 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 12:36:40.981506  427857 main.go:141] libmachine: Decoding PEM data...
	I0408 12:36:40.981524  427857 main.go:141] libmachine: Parsing certificate...
	I0408 12:36:40.981551  427857 main.go:141] libmachine: Running pre-create checks...
	I0408 12:36:40.981567  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .PreCreateCheck
	I0408 12:36:40.981993  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:36:40.982484  427857 main.go:141] libmachine: Creating machine...
	I0408 12:36:40.982499  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .Create
	I0408 12:36:40.982687  427857 main.go:141] libmachine: (old-k8s-version-384148) Creating KVM machine...
	I0408 12:36:40.984323  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found existing default KVM network
	I0408 12:36:40.986053  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:40.985867  427880 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f7f0}
	I0408 12:36:40.986103  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | created network xml: 
	I0408 12:36:40.986120  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | <network>
	I0408 12:36:40.986139  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |   <name>mk-old-k8s-version-384148</name>
	I0408 12:36:40.986151  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |   <dns enable='no'/>
	I0408 12:36:40.986163  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |   
	I0408 12:36:40.986177  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 12:36:40.986209  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |     <dhcp>
	I0408 12:36:40.986225  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 12:36:40.986234  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |     </dhcp>
	I0408 12:36:40.986247  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |   </ip>
	I0408 12:36:40.986260  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG |   
	I0408 12:36:40.986274  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | </network>
	I0408 12:36:40.986285  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | 
	I0408 12:36:40.992225  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | trying to create private KVM network mk-old-k8s-version-384148 192.168.39.0/24...
	I0408 12:36:41.076439  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | private KVM network mk-old-k8s-version-384148 192.168.39.0/24 created
	I0408 12:36:41.076487  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148 ...
	I0408 12:36:41.076544  427857 main.go:141] libmachine: (old-k8s-version-384148) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 12:36:41.076567  427857 main.go:141] libmachine: (old-k8s-version-384148) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 12:36:41.076583  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:41.076399  427880 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:36:41.371718  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:41.371505  427880 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa...
	I0408 12:36:41.640077  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:41.639947  427880 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/old-k8s-version-384148.rawdisk...
	I0408 12:36:41.640114  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Writing magic tar header
	I0408 12:36:41.640131  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Writing SSH key tar header
	I0408 12:36:41.640144  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:41.640103  427880 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148 ...
	I0408 12:36:41.640253  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148
	I0408 12:36:41.640310  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148 (perms=drwx------)
	I0408 12:36:41.640336  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 12:36:41.640352  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:36:41.640360  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 12:36:41.640371  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 12:36:41.640381  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 12:36:41.640391  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 12:36:41.640404  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 12:36:41.640421  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 12:36:41.640432  427857 main.go:141] libmachine: (old-k8s-version-384148) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 12:36:41.640438  427857 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:36:41.640449  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home/jenkins
	I0408 12:36:41.640457  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Checking permissions on dir: /home
	I0408 12:36:41.640488  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Skipping /home - not owner
	I0408 12:36:41.641814  427857 main.go:141] libmachine: (old-k8s-version-384148) define libvirt domain using xml: 
	I0408 12:36:41.641838  427857 main.go:141] libmachine: (old-k8s-version-384148) <domain type='kvm'>
	I0408 12:36:41.641848  427857 main.go:141] libmachine: (old-k8s-version-384148)   <name>old-k8s-version-384148</name>
	I0408 12:36:41.641861  427857 main.go:141] libmachine: (old-k8s-version-384148)   <memory unit='MiB'>2200</memory>
	I0408 12:36:41.641870  427857 main.go:141] libmachine: (old-k8s-version-384148)   <vcpu>2</vcpu>
	I0408 12:36:41.641882  427857 main.go:141] libmachine: (old-k8s-version-384148)   <features>
	I0408 12:36:41.641905  427857 main.go:141] libmachine: (old-k8s-version-384148)     <acpi/>
	I0408 12:36:41.641926  427857 main.go:141] libmachine: (old-k8s-version-384148)     <apic/>
	I0408 12:36:41.641945  427857 main.go:141] libmachine: (old-k8s-version-384148)     <pae/>
	I0408 12:36:41.641963  427857 main.go:141] libmachine: (old-k8s-version-384148)     
	I0408 12:36:41.641972  427857 main.go:141] libmachine: (old-k8s-version-384148)   </features>
	I0408 12:36:41.641980  427857 main.go:141] libmachine: (old-k8s-version-384148)   <cpu mode='host-passthrough'>
	I0408 12:36:41.641989  427857 main.go:141] libmachine: (old-k8s-version-384148)   
	I0408 12:36:41.641995  427857 main.go:141] libmachine: (old-k8s-version-384148)   </cpu>
	I0408 12:36:41.642004  427857 main.go:141] libmachine: (old-k8s-version-384148)   <os>
	I0408 12:36:41.642016  427857 main.go:141] libmachine: (old-k8s-version-384148)     <type>hvm</type>
	I0408 12:36:41.642048  427857 main.go:141] libmachine: (old-k8s-version-384148)     <boot dev='cdrom'/>
	I0408 12:36:41.642077  427857 main.go:141] libmachine: (old-k8s-version-384148)     <boot dev='hd'/>
	I0408 12:36:41.642102  427857 main.go:141] libmachine: (old-k8s-version-384148)     <bootmenu enable='no'/>
	I0408 12:36:41.642123  427857 main.go:141] libmachine: (old-k8s-version-384148)   </os>
	I0408 12:36:41.642139  427857 main.go:141] libmachine: (old-k8s-version-384148)   <devices>
	I0408 12:36:41.642155  427857 main.go:141] libmachine: (old-k8s-version-384148)     <disk type='file' device='cdrom'>
	I0408 12:36:41.642173  427857 main.go:141] libmachine: (old-k8s-version-384148)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/boot2docker.iso'/>
	I0408 12:36:41.642185  427857 main.go:141] libmachine: (old-k8s-version-384148)       <target dev='hdc' bus='scsi'/>
	I0408 12:36:41.642214  427857 main.go:141] libmachine: (old-k8s-version-384148)       <readonly/>
	I0408 12:36:41.642236  427857 main.go:141] libmachine: (old-k8s-version-384148)     </disk>
	I0408 12:36:41.642246  427857 main.go:141] libmachine: (old-k8s-version-384148)     <disk type='file' device='disk'>
	I0408 12:36:41.642256  427857 main.go:141] libmachine: (old-k8s-version-384148)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 12:36:41.642275  427857 main.go:141] libmachine: (old-k8s-version-384148)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/old-k8s-version-384148.rawdisk'/>
	I0408 12:36:41.642293  427857 main.go:141] libmachine: (old-k8s-version-384148)       <target dev='hda' bus='virtio'/>
	I0408 12:36:41.642318  427857 main.go:141] libmachine: (old-k8s-version-384148)     </disk>
	I0408 12:36:41.642347  427857 main.go:141] libmachine: (old-k8s-version-384148)     <interface type='network'>
	I0408 12:36:41.642358  427857 main.go:141] libmachine: (old-k8s-version-384148)       <source network='mk-old-k8s-version-384148'/>
	I0408 12:36:41.642370  427857 main.go:141] libmachine: (old-k8s-version-384148)       <model type='virtio'/>
	I0408 12:36:41.642378  427857 main.go:141] libmachine: (old-k8s-version-384148)     </interface>
	I0408 12:36:41.642389  427857 main.go:141] libmachine: (old-k8s-version-384148)     <interface type='network'>
	I0408 12:36:41.642401  427857 main.go:141] libmachine: (old-k8s-version-384148)       <source network='default'/>
	I0408 12:36:41.642412  427857 main.go:141] libmachine: (old-k8s-version-384148)       <model type='virtio'/>
	I0408 12:36:41.642423  427857 main.go:141] libmachine: (old-k8s-version-384148)     </interface>
	I0408 12:36:41.642431  427857 main.go:141] libmachine: (old-k8s-version-384148)     <serial type='pty'>
	I0408 12:36:41.642442  427857 main.go:141] libmachine: (old-k8s-version-384148)       <target port='0'/>
	I0408 12:36:41.642453  427857 main.go:141] libmachine: (old-k8s-version-384148)     </serial>
	I0408 12:36:41.642471  427857 main.go:141] libmachine: (old-k8s-version-384148)     <console type='pty'>
	I0408 12:36:41.642483  427857 main.go:141] libmachine: (old-k8s-version-384148)       <target type='serial' port='0'/>
	I0408 12:36:41.642492  427857 main.go:141] libmachine: (old-k8s-version-384148)     </console>
	I0408 12:36:41.642502  427857 main.go:141] libmachine: (old-k8s-version-384148)     <rng model='virtio'>
	I0408 12:36:41.642516  427857 main.go:141] libmachine: (old-k8s-version-384148)       <backend model='random'>/dev/random</backend>
	I0408 12:36:41.642526  427857 main.go:141] libmachine: (old-k8s-version-384148)     </rng>
	I0408 12:36:41.642534  427857 main.go:141] libmachine: (old-k8s-version-384148)     
	I0408 12:36:41.642544  427857 main.go:141] libmachine: (old-k8s-version-384148)     
	I0408 12:36:41.642552  427857 main.go:141] libmachine: (old-k8s-version-384148)   </devices>
	I0408 12:36:41.642561  427857 main.go:141] libmachine: (old-k8s-version-384148) </domain>
	I0408 12:36:41.642572  427857 main.go:141] libmachine: (old-k8s-version-384148) 
	I0408 12:36:41.647236  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:02:8e:a4 in network default
	I0408 12:36:41.647870  427857 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:36:41.647903  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:41.648594  427857 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:36:41.649014  427857 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:36:41.649674  427857 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:36:41.650890  427857 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:36:43.078358  427857 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:36:43.079477  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:43.080104  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:43.080132  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:43.080059  427880 retry.go:31] will retry after 213.349334ms: waiting for machine to come up
	I0408 12:36:43.295813  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:43.296478  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:43.296508  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:43.296416  427880 retry.go:31] will retry after 268.560644ms: waiting for machine to come up
	I0408 12:36:43.567297  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:43.568014  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:43.568038  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:43.567970  427880 retry.go:31] will retry after 370.668467ms: waiting for machine to come up
	I0408 12:36:43.940742  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:43.941462  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:43.941497  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:43.941403  427880 retry.go:31] will retry after 566.797332ms: waiting for machine to come up
	I0408 12:36:44.510068  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:44.510696  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:44.510730  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:44.510614  427880 retry.go:31] will retry after 732.470397ms: waiting for machine to come up
	I0408 12:36:45.245329  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:45.246106  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:45.246134  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:45.245998  427880 retry.go:31] will retry after 811.885095ms: waiting for machine to come up
	I0408 12:36:46.059585  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:46.060413  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:46.060464  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:46.060361  427880 retry.go:31] will retry after 940.045894ms: waiting for machine to come up
	I0408 12:36:47.001698  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:47.002308  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:47.002339  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:47.002241  427880 retry.go:31] will retry after 1.410862329s: waiting for machine to come up
	I0408 12:36:48.414505  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:48.415114  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:48.415147  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:48.415055  427880 retry.go:31] will retry after 1.750440519s: waiting for machine to come up
	I0408 12:36:50.167101  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:50.167701  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:50.167729  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:50.167633  427880 retry.go:31] will retry after 1.574388234s: waiting for machine to come up
	I0408 12:36:51.743563  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:51.744185  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:51.744218  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:51.744093  427880 retry.go:31] will retry after 2.066042502s: waiting for machine to come up
	I0408 12:36:53.811994  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:53.812698  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:53.812729  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:53.812612  427880 retry.go:31] will retry after 3.610461988s: waiting for machine to come up
	I0408 12:36:57.424954  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:36:57.425551  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:36:57.425579  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:36:57.425494  427880 retry.go:31] will retry after 3.073515603s: waiting for machine to come up
	I0408 12:37:00.501090  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:00.501698  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:37:00.501748  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:37:00.501641  427880 retry.go:31] will retry after 5.461813527s: waiting for machine to come up
	I0408 12:37:05.968982  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:05.969672  427857 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:37:05.969697  427857 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:37:05.969707  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:05.970090  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148
	I0408 12:37:06.050095  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:37:06.050135  427857 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:37:06.050151  427857 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:37:06.052797  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.053179  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.053210  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.053363  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:37:06.053396  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:37:06.053448  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:37:06.053474  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:37:06.053492  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:37:06.184419  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:37:06.184744  427857 main.go:141] libmachine: (old-k8s-version-384148) KVM machine creation complete!
	I0408 12:37:06.185053  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:37:06.185727  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:06.185981  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:06.186271  427857 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0408 12:37:06.186291  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:37:06.187955  427857 main.go:141] libmachine: Detecting operating system of created instance...
	I0408 12:37:06.187970  427857 main.go:141] libmachine: Waiting for SSH to be available...
	I0408 12:37:06.187976  427857 main.go:141] libmachine: Getting to WaitForSSH function...
	I0408 12:37:06.187984  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:06.191839  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.192236  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.192284  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.192445  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:06.192665  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.192826  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.193069  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:06.193314  427857 main.go:141] libmachine: Using SSH client type: native
	I0408 12:37:06.193608  427857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:37:06.193624  427857 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0408 12:37:06.291256  427857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:37:06.291303  427857 main.go:141] libmachine: Detecting the provisioner...
	I0408 12:37:06.291314  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:06.294244  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.294634  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.294668  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.294853  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:06.295053  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.295280  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.295456  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:06.295740  427857 main.go:141] libmachine: Using SSH client type: native
	I0408 12:37:06.296005  427857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:37:06.296022  427857 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0408 12:37:06.400836  427857 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0408 12:37:06.400921  427857 main.go:141] libmachine: found compatible host: buildroot
	I0408 12:37:06.400931  427857 main.go:141] libmachine: Provisioning with buildroot...
	I0408 12:37:06.400941  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:37:06.401255  427857 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:37:06.401286  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:37:06.401477  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:06.404242  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.404589  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.404611  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.404780  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:06.404992  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.405169  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.405308  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:06.405501  427857 main.go:141] libmachine: Using SSH client type: native
	I0408 12:37:06.405728  427857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:37:06.405746  427857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:37:06.519168  427857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:37:06.519209  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:06.522455  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.523009  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.523043  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.523273  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:06.523453  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.523649  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.523839  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:06.524051  427857 main.go:141] libmachine: Using SSH client type: native
	I0408 12:37:06.524307  427857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:37:06.524334  427857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:37:06.634132  427857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:37:06.634175  427857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:37:06.634204  427857 buildroot.go:174] setting up certificates
	I0408 12:37:06.634223  427857 provision.go:84] configureAuth start
	I0408 12:37:06.634243  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:37:06.634579  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:37:06.637781  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.638174  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.638214  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.638343  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:06.640612  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.641037  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.641075  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.641262  427857 provision.go:143] copyHostCerts
	I0408 12:37:06.641344  427857 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:37:06.641380  427857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:37:06.641467  427857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:37:06.641595  427857 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:37:06.641608  427857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:37:06.641644  427857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:37:06.641838  427857 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:37:06.641864  427857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:37:06.641931  427857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:37:06.642027  427857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:37:06.962159  427857 provision.go:177] copyRemoteCerts
	I0408 12:37:06.962227  427857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:37:06.962282  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:06.965196  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.965499  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:06.965526  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:06.965701  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:06.965985  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:06.966142  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:06.966275  427857 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:37:07.046605  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:37:07.073461  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 12:37:07.102841  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:37:07.130954  427857 provision.go:87] duration metric: took 496.70594ms to configureAuth
	I0408 12:37:07.130994  427857 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:37:07.131176  427857 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:37:07.131260  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:07.134020  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.134341  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.134383  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.134579  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:07.134816  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.134986  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.135158  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:07.135308  427857 main.go:141] libmachine: Using SSH client type: native
	I0408 12:37:07.135521  427857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:37:07.135549  427857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:37:07.425784  427857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:37:07.425819  427857 main.go:141] libmachine: Checking connection to Docker...
	I0408 12:37:07.425830  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetURL
	I0408 12:37:07.427206  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using libvirt version 6000000
	I0408 12:37:07.429616  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.429966  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.429995  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.430223  427857 main.go:141] libmachine: Docker is up and running!
	I0408 12:37:07.430237  427857 main.go:141] libmachine: Reticulating splines...
	I0408 12:37:07.430245  427857 client.go:171] duration metric: took 26.448996743s to LocalClient.Create
	I0408 12:37:07.430272  427857 start.go:167] duration metric: took 26.449083043s to libmachine.API.Create "old-k8s-version-384148"
	I0408 12:37:07.430285  427857 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:37:07.430313  427857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:37:07.430333  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:07.430649  427857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:37:07.430679  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:07.433586  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.434016  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.434048  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.434221  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:07.434453  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.434646  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:07.434760  427857 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:37:07.522920  427857 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:37:07.527774  427857 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:37:07.527803  427857 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:37:07.527859  427857 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:37:07.527943  427857 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:37:07.528047  427857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:37:07.538199  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:37:07.564982  427857 start.go:296] duration metric: took 134.67889ms for postStartSetup
	I0408 12:37:07.565051  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:37:07.565757  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:37:07.568926  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.569330  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.569359  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.569643  427857 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:37:07.569909  427857 start.go:128] duration metric: took 26.611180267s to createHost
	I0408 12:37:07.569955  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:07.572306  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.572644  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.572666  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.572853  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:07.573072  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.573290  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.573433  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:07.573558  427857 main.go:141] libmachine: Using SSH client type: native
	I0408 12:37:07.573729  427857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:37:07.573753  427857 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 12:37:07.676909  427857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712579827.661479819
	
	I0408 12:37:07.676938  427857 fix.go:216] guest clock: 1712579827.661479819
	I0408 12:37:07.676948  427857 fix.go:229] Guest: 2024-04-08 12:37:07.661479819 +0000 UTC Remote: 2024-04-08 12:37:07.569936836 +0000 UTC m=+26.756682625 (delta=91.542983ms)
	I0408 12:37:07.676991  427857 fix.go:200] guest clock delta is within tolerance: 91.542983ms
	I0408 12:37:07.676999  427857 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 26.71840661s
	I0408 12:37:07.677034  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:07.677359  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:37:07.680284  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.680565  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.680596  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.680818  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:07.684125  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:07.684351  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:37:07.684442  427857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:37:07.684515  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:07.684591  427857 ssh_runner.go:195] Run: cat /version.json
	I0408 12:37:07.684617  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:37:07.687215  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.687279  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.687643  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.687674  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.687721  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:07.687764  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:07.687871  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:07.688070  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.688106  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:37:07.688284  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:37:07.688292  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:07.688486  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:37:07.688482  427857 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:37:07.688627  427857 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:37:07.806264  427857 ssh_runner.go:195] Run: systemctl --version
	I0408 12:37:07.815797  427857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:37:07.985625  427857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:37:07.992407  427857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:37:07.992475  427857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:37:08.019674  427857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:37:08.019715  427857 start.go:494] detecting cgroup driver to use...
	I0408 12:37:08.019808  427857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:37:08.047212  427857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:37:08.065153  427857 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:37:08.065230  427857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:37:08.084092  427857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:37:08.101065  427857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:37:08.242982  427857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:37:08.396323  427857 docker.go:233] disabling docker service ...
	I0408 12:37:08.396386  427857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:37:08.417263  427857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:37:08.432215  427857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:37:08.583200  427857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:37:08.753546  427857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:37:08.770006  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:37:08.793679  427857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:37:08.793755  427857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:37:08.806953  427857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:37:08.807047  427857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:37:08.821115  427857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:37:08.837375  427857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:37:08.853929  427857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:37:08.869693  427857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:37:08.883029  427857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:37:08.883109  427857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:37:08.904682  427857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:37:08.916518  427857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:37:09.075999  427857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:37:09.257481  427857 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:37:09.257563  427857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:37:09.264128  427857 start.go:562] Will wait 60s for crictl version
	I0408 12:37:09.264205  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:09.270084  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:37:09.314688  427857 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:37:09.314784  427857 ssh_runner.go:195] Run: crio --version
	I0408 12:37:09.348727  427857 ssh_runner.go:195] Run: crio --version
	I0408 12:37:09.389330  427857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:37:09.390734  427857 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:37:09.394401  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:09.394941  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:36:57 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:37:09.394998  427857 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:37:09.395239  427857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:37:09.400150  427857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:37:09.416497  427857 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:37:09.416644  427857 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:37:09.416717  427857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:37:09.464939  427857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:37:09.465025  427857 ssh_runner.go:195] Run: which lz4
	I0408 12:37:09.470089  427857 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:37:09.477942  427857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:37:09.478003  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:37:11.475839  427857 crio.go:462] duration metric: took 2.005762609s to copy over tarball
	I0408 12:37:11.475972  427857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:37:14.514832  427857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.038822794s)
	I0408 12:37:14.514873  427857 crio.go:469] duration metric: took 3.03899703s to extract the tarball
	I0408 12:37:14.514886  427857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:37:14.562209  427857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:37:14.620287  427857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:37:14.620315  427857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:37:14.620404  427857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:37:14.620423  427857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:37:14.620436  427857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:37:14.620474  427857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:37:14.620544  427857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:37:14.620482  427857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:37:14.620438  427857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:37:14.620933  427857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:37:14.621890  427857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:37:14.621920  427857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:37:14.622197  427857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:37:14.622198  427857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:37:14.622207  427857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:37:14.622279  427857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:37:14.622384  427857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:37:14.622477  427857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:37:14.845508  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:37:14.855989  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:37:14.857338  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:37:14.862269  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:37:14.863958  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:37:14.873446  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:37:14.879222  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:37:14.961775  427857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:37:14.961842  427857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:37:14.961903  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:14.963572  427857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:37:14.963622  427857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:37:14.963670  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:15.040035  427857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:37:15.040086  427857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:37:15.040131  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:15.056709  427857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:37:15.056769  427857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:37:15.056832  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:15.076748  427857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:37:15.076797  427857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:37:15.076802  427857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:37:15.076834  427857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:37:15.076848  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:15.076857  427857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:37:15.076834  427857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:37:15.076899  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:15.076910  427857 ssh_runner.go:195] Run: which crictl
	I0408 12:37:15.076911  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:37:15.076940  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:37:15.076982  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:37:15.077007  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:37:15.081528  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:37:15.164784  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:37:15.164859  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:37:15.209138  427857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:37:15.209193  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:37:15.209219  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:37:15.209293  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:37:15.209295  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:37:15.230321  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:37:15.255579  427857 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:37:15.504235  427857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:37:15.654550  427857 cache_images.go:92] duration metric: took 1.034214764s to LoadCachedImages
	W0408 12:37:15.654659  427857 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0408 12:37:15.654680  427857 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:37:15.654840  427857 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:37:15.654953  427857 ssh_runner.go:195] Run: crio config
	I0408 12:37:15.710377  427857 cni.go:84] Creating CNI manager for ""
	I0408 12:37:15.710406  427857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:37:15.710421  427857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:37:15.710440  427857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:37:15.710597  427857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:37:15.710700  427857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:37:15.722565  427857 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:37:15.722654  427857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:37:15.734077  427857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:37:15.753816  427857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:37:15.776837  427857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:37:15.796664  427857 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:37:15.801300  427857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:37:15.815331  427857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:37:15.959779  427857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:37:16.047536  427857 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:37:16.047567  427857 certs.go:194] generating shared ca certs ...
	I0408 12:37:16.047589  427857 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.047893  427857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:37:16.047989  427857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:37:16.048007  427857 certs.go:256] generating profile certs ...
	I0408 12:37:16.048100  427857 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:37:16.048129  427857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.crt with IP's: []
	I0408 12:37:16.597439  427857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.crt ...
	I0408 12:37:16.597479  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.crt: {Name:mkbd9ea82a6cd666f29931f10aa38dd0f6cd46ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.597675  427857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key ...
	I0408 12:37:16.597694  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key: {Name:mkbf40c6f4a2d1dd392be5aacb14f587861a0057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.597801  427857 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:37:16.597824  427857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt.b153d6a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.245]
	I0408 12:37:16.732869  427857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt.b153d6a1 ...
	I0408 12:37:16.732900  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt.b153d6a1: {Name:mk0cc72a128492f6574332a7f9b12c9d2fd13036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.733087  427857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1 ...
	I0408 12:37:16.733105  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1: {Name:mk3352b132cc32fb8914e346a0480aac8fea5099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.733207  427857 certs.go:381] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt.b153d6a1 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt
	I0408 12:37:16.733331  427857 certs.go:385] copying /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1 -> /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key
	I0408 12:37:16.733417  427857 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:37:16.733440  427857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt with IP's: []
	I0408 12:37:16.855983  427857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt ...
	I0408 12:37:16.856023  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt: {Name:mk6f9a4afa9e885b2e6ada5e55785cba45ffa0ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.856214  427857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key ...
	I0408 12:37:16.856232  427857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key: {Name:mk11808c2213b7d7d1bf671e4654182ee87a2d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:37:16.856455  427857 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:37:16.856512  427857 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:37:16.856533  427857 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:37:16.856568  427857 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:37:16.856602  427857 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:37:16.856635  427857 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:37:16.856698  427857 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:37:16.857345  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:37:16.892207  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:37:16.928004  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:37:16.964797  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:37:17.004861  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:37:17.036124  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:37:17.063228  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:37:17.098776  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:37:17.133349  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:37:17.167704  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:37:17.202324  427857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:37:17.234653  427857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:37:17.258813  427857 ssh_runner.go:195] Run: openssl version
	I0408 12:37:17.266813  427857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:37:17.281123  427857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:37:17.286503  427857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:37:17.286598  427857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:37:17.293160  427857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:37:17.309484  427857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:37:17.324827  427857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:37:17.332278  427857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:37:17.332353  427857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:37:17.340768  427857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:37:17.357450  427857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:37:17.373556  427857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:37:17.380620  427857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:37:17.380701  427857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:37:17.388916  427857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:37:17.403901  427857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:37:17.409743  427857 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 12:37:17.409806  427857 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:37:17.409923  427857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:37:17.410090  427857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:37:17.455602  427857 cri.go:89] found id: ""
	I0408 12:37:17.455721  427857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 12:37:17.469952  427857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:37:17.484433  427857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:37:17.495741  427857 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:37:17.495770  427857 kubeadm.go:156] found existing configuration files:
	
	I0408 12:37:17.495830  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:37:17.508488  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:37:17.508586  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:37:17.519716  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:37:17.529471  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:37:17.529556  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:37:17.539716  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:37:17.549755  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:37:17.549836  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:37:17.563404  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:37:17.574746  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:37:17.574822  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:37:17.586301  427857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:37:17.789342  427857 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:37:17.789428  427857 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:37:17.974672  427857 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:37:17.974825  427857 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:37:17.974988  427857 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:37:18.199712  427857 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:37:18.201803  427857 out.go:204]   - Generating certificates and keys ...
	I0408 12:37:18.201922  427857 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:37:18.202007  427857 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:37:18.741593  427857 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 12:37:18.915142  427857 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0408 12:37:18.981888  427857 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0408 12:37:19.092993  427857 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0408 12:37:19.393290  427857 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0408 12:37:19.398333  427857 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-384148] and IPs [192.168.39.245 127.0.0.1 ::1]
	I0408 12:37:19.518743  427857 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0408 12:37:19.519273  427857 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-384148] and IPs [192.168.39.245 127.0.0.1 ::1]
	I0408 12:37:19.731332  427857 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 12:37:19.828468  427857 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 12:37:19.918967  427857 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0408 12:37:19.919182  427857 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:37:20.050224  427857 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:37:20.304337  427857 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:37:20.536350  427857 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:37:20.648576  427857 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:37:20.673858  427857 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:37:20.675143  427857 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:37:20.675214  427857 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:37:20.889069  427857 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:37:20.890979  427857 out.go:204]   - Booting up control plane ...
	I0408 12:37:20.891130  427857 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:37:20.912279  427857 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:37:20.914422  427857 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:37:20.916017  427857 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:37:20.924317  427857 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:38:00.922955  427857 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:38:00.923096  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:38:00.923360  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:38:05.923958  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:38:05.924135  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:38:15.925162  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:38:15.925365  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:38:35.926748  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:38:35.927017  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:39:15.926789  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:39:15.927097  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:39:15.927141  427857 kubeadm.go:309] 
	I0408 12:39:15.927218  427857 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:39:15.927290  427857 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:39:15.927309  427857 kubeadm.go:309] 
	I0408 12:39:15.927344  427857 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:39:15.927388  427857 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:39:15.927535  427857 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:39:15.927548  427857 kubeadm.go:309] 
	I0408 12:39:15.927786  427857 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:39:15.927847  427857 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:39:15.927905  427857 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:39:15.927922  427857 kubeadm.go:309] 
	I0408 12:39:15.928083  427857 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:39:15.928229  427857 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:39:15.928243  427857 kubeadm.go:309] 
	I0408 12:39:15.928389  427857 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:39:15.928522  427857 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:39:15.928638  427857 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:39:15.928753  427857 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:39:15.928777  427857 kubeadm.go:309] 
	I0408 12:39:15.929636  427857 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:39:15.929763  427857 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:39:15.929927  427857 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0408 12:39:15.930054  427857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-384148] and IPs [192.168.39.245 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-384148] and IPs [192.168.39.245 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-384148] and IPs [192.168.39.245 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-384148] and IPs [192.168.39.245 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:39:15.930119  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:39:18.565693  427857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.635530602s)
	I0408 12:39:18.565777  427857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:39:18.584365  427857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:39:18.595805  427857 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:39:18.595828  427857 kubeadm.go:156] found existing configuration files:
	
	I0408 12:39:18.595898  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:39:18.606868  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:39:18.606948  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:39:18.619153  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:39:18.630563  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:39:18.630642  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:39:18.642131  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:39:18.654092  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:39:18.654175  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:39:18.666958  427857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:39:18.678002  427857 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:39:18.678062  427857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:39:18.689377  427857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:39:18.947515  427857 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:41:15.244643  427857 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:41:15.244753  427857 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:41:15.246505  427857 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:41:15.246564  427857 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:41:15.246660  427857 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:41:15.246779  427857 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:41:15.247005  427857 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:41:15.247075  427857 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:41:15.248982  427857 out.go:204]   - Generating certificates and keys ...
	I0408 12:41:15.249063  427857 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:41:15.249154  427857 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:41:15.249252  427857 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:41:15.249336  427857 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:41:15.249426  427857 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:41:15.249510  427857 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:41:15.249578  427857 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:41:15.249638  427857 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:41:15.249709  427857 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:41:15.249809  427857 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:41:15.249856  427857 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:41:15.249908  427857 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:41:15.249973  427857 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:41:15.250030  427857 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:41:15.250083  427857 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:41:15.250153  427857 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:41:15.250258  427857 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:41:15.250348  427857 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:41:15.250414  427857 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:41:15.250515  427857 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:41:15.251950  427857 out.go:204]   - Booting up control plane ...
	I0408 12:41:15.252074  427857 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:41:15.252161  427857 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:41:15.252217  427857 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:41:15.252288  427857 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:41:15.252454  427857 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:41:15.252555  427857 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:41:15.252648  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:41:15.252870  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:41:15.252965  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:41:15.253206  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:41:15.253296  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:41:15.253499  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:41:15.253591  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:41:15.253788  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:41:15.253898  427857 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:41:15.254052  427857 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:41:15.254061  427857 kubeadm.go:309] 
	I0408 12:41:15.254107  427857 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:41:15.254142  427857 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:41:15.254153  427857 kubeadm.go:309] 
	I0408 12:41:15.254188  427857 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:41:15.254218  427857 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:41:15.254345  427857 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:41:15.254354  427857 kubeadm.go:309] 
	I0408 12:41:15.254443  427857 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:41:15.254473  427857 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:41:15.254503  427857 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:41:15.254509  427857 kubeadm.go:309] 
	I0408 12:41:15.254622  427857 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:41:15.254759  427857 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:41:15.254776  427857 kubeadm.go:309] 
	I0408 12:41:15.254907  427857 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:41:15.254988  427857 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:41:15.255068  427857 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:41:15.255167  427857 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:41:15.255207  427857 kubeadm.go:309] 
	I0408 12:41:15.255292  427857 kubeadm.go:393] duration metric: took 3m57.845493267s to StartCluster
	I0408 12:41:15.255383  427857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:41:15.255466  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:41:15.303417  427857 cri.go:89] found id: ""
	I0408 12:41:15.303460  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.303471  427857 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:41:15.303480  427857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:41:15.303549  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:41:15.342101  427857 cri.go:89] found id: ""
	I0408 12:41:15.342136  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.342147  427857 logs.go:278] No container was found matching "etcd"
	I0408 12:41:15.342156  427857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:41:15.342230  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:41:15.381005  427857 cri.go:89] found id: ""
	I0408 12:41:15.381037  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.381046  427857 logs.go:278] No container was found matching "coredns"
	I0408 12:41:15.381053  427857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:41:15.381132  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:41:15.422117  427857 cri.go:89] found id: ""
	I0408 12:41:15.422145  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.422154  427857 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:41:15.422160  427857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:41:15.422221  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:41:15.460054  427857 cri.go:89] found id: ""
	I0408 12:41:15.460092  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.460104  427857 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:41:15.460112  427857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:41:15.460181  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:41:15.497884  427857 cri.go:89] found id: ""
	I0408 12:41:15.497911  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.497920  427857 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:41:15.497927  427857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:41:15.498016  427857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:41:15.536837  427857 cri.go:89] found id: ""
	I0408 12:41:15.536880  427857 logs.go:276] 0 containers: []
	W0408 12:41:15.536901  427857 logs.go:278] No container was found matching "kindnet"
	I0408 12:41:15.536914  427857 logs.go:123] Gathering logs for dmesg ...
	I0408 12:41:15.536933  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:41:15.550650  427857 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:41:15.550688  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:41:15.669989  427857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:41:15.670019  427857 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:41:15.670037  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:41:15.763806  427857 logs.go:123] Gathering logs for container status ...
	I0408 12:41:15.763849  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:41:15.807522  427857 logs.go:123] Gathering logs for kubelet ...
	I0408 12:41:15.807562  427857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0408 12:41:15.859436  427857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:41:15.859491  427857 out.go:239] * 
	* 
	W0408 12:41:15.859556  427857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:41:15.859579  427857 out.go:239] * 
	* 
	W0408 12:41:15.860331  427857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:41:15.864274  427857 out.go:177] 
	W0408 12:41:15.865660  427857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:41:15.865709  427857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:41:15.865736  427857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:41:15.867283  427857 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-384148 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 6 (252.725787ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:16.166953  432945 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-384148" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (275.38s)
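The FirstStart failure above exits with K8S_KUBELET_NOT_RUNNING: kubeadm repeatedly polls the kubelet's local health endpoint (http://localhost:10248/healthz) and gives up after nothing answers. A minimal diagnostic sketch, run inside the guest; the profile name is taken from the log above, and the exact unit state and socket path are assumptions, not part of the test itself:

	# open a shell in the failing VM
	minikube ssh -p old-k8s-version-384148

	# is the kubelet unit actually running? (the kubeadm warning notes it is not even enabled)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# the same health check kubeadm performs while waiting for the control plane
	curl -sSL http://localhost:10248/healthz

	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet unit is dead, its journal usually names the cause (commonly a cgroup-driver mismatch, which matches the suggestion in the log to pass --extra-config=kubelet.cgroup-driver=systemd to minikube start).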

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-527454 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-527454 --alsologtostderr -v=3: exit status 82 (2m0.561234382s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-527454"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:39:30.214936  432357 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:39:30.215062  432357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:39:30.215071  432357 out.go:304] Setting ErrFile to fd 2...
	I0408 12:39:30.215076  432357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:39:30.215306  432357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:39:30.215582  432357 out.go:298] Setting JSON to false
	I0408 12:39:30.215658  432357 mustload.go:65] Loading cluster: default-k8s-diff-port-527454
	I0408 12:39:30.215995  432357 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:39:30.216063  432357 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:39:30.216227  432357 mustload.go:65] Loading cluster: default-k8s-diff-port-527454
	I0408 12:39:30.216326  432357 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:39:30.216352  432357 stop.go:39] StopHost: default-k8s-diff-port-527454
	I0408 12:39:30.216756  432357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:39:30.216809  432357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:39:30.232290  432357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I0408 12:39:30.232809  432357 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:39:30.233544  432357 main.go:141] libmachine: Using API Version  1
	I0408 12:39:30.233572  432357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:39:30.233941  432357 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:39:30.237705  432357 out.go:177] * Stopping node "default-k8s-diff-port-527454"  ...
	I0408 12:39:30.239249  432357 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 12:39:30.239333  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:39:30.239596  432357 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 12:39:30.239622  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:39:30.243061  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:39:30.243611  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:38:33 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:39:30.243640  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:39:30.243862  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:39:30.244058  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:39:30.244234  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:39:30.244422  432357 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:39:30.358930  432357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 12:39:30.404404  432357 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 12:39:30.485979  432357 main.go:141] libmachine: Stopping "default-k8s-diff-port-527454"...
	I0408 12:39:30.486024  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:39:30.488298  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Stop
	I0408 12:39:30.492365  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 0/120
	I0408 12:39:31.494373  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 1/120
	I0408 12:39:32.495832  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 2/120
	I0408 12:39:33.497615  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 3/120
	I0408 12:39:34.499174  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 4/120
	I0408 12:39:35.501573  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 5/120
	I0408 12:39:36.503434  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 6/120
	I0408 12:39:37.505539  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 7/120
	I0408 12:39:38.507630  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 8/120
	I0408 12:39:39.509133  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 9/120
	I0408 12:39:40.511103  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 10/120
	I0408 12:39:41.512923  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 11/120
	I0408 12:39:42.514298  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 12/120
	I0408 12:39:43.516752  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 13/120
	I0408 12:39:44.518754  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 14/120
	I0408 12:39:45.521138  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 15/120
	I0408 12:39:46.522540  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 16/120
	I0408 12:39:47.524300  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 17/120
	I0408 12:39:48.526047  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 18/120
	I0408 12:39:49.527448  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 19/120
	I0408 12:39:50.530054  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 20/120
	I0408 12:39:51.531635  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 21/120
	I0408 12:39:52.533334  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 22/120
	I0408 12:39:53.534726  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 23/120
	I0408 12:39:54.536304  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 24/120
	I0408 12:39:55.538518  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 25/120
	I0408 12:39:56.540336  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 26/120
	I0408 12:39:57.541940  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 27/120
	I0408 12:39:58.543451  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 28/120
	I0408 12:39:59.545093  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 29/120
	I0408 12:40:00.546766  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 30/120
	I0408 12:40:01.548649  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 31/120
	I0408 12:40:02.550292  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 32/120
	I0408 12:40:03.551927  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 33/120
	I0408 12:40:04.554380  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 34/120
	I0408 12:40:05.556748  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 35/120
	I0408 12:40:06.558161  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 36/120
	I0408 12:40:07.559835  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 37/120
	I0408 12:40:08.561820  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 38/120
	I0408 12:40:09.563225  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 39/120
	I0408 12:40:10.565728  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 40/120
	I0408 12:40:11.567580  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 41/120
	I0408 12:40:12.569451  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 42/120
	I0408 12:40:13.570956  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 43/120
	I0408 12:40:14.572845  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 44/120
	I0408 12:40:15.575325  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 45/120
	I0408 12:40:16.576945  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 46/120
	I0408 12:40:17.578532  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 47/120
	I0408 12:40:18.579904  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 48/120
	I0408 12:40:19.582516  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 49/120
	I0408 12:40:20.584160  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 50/120
	I0408 12:40:21.585620  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 51/120
	I0408 12:40:22.587234  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 52/120
	I0408 12:40:23.588898  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 53/120
	I0408 12:40:24.590438  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 54/120
	I0408 12:40:25.592787  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 55/120
	I0408 12:40:26.594386  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 56/120
	I0408 12:40:27.596039  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 57/120
	I0408 12:40:28.597721  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 58/120
	I0408 12:40:29.599647  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 59/120
	I0408 12:40:30.601230  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 60/120
	I0408 12:40:31.602865  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 61/120
	I0408 12:40:32.604638  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 62/120
	I0408 12:40:33.606253  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 63/120
	I0408 12:40:34.608062  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 64/120
	I0408 12:40:35.610334  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 65/120
	I0408 12:40:36.612236  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 66/120
	I0408 12:40:37.613644  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 67/120
	I0408 12:40:38.615173  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 68/120
	I0408 12:40:39.616733  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 69/120
	I0408 12:40:40.618134  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 70/120
	I0408 12:40:41.619608  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 71/120
	I0408 12:40:42.621302  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 72/120
	I0408 12:40:43.623064  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 73/120
	I0408 12:40:44.624662  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 74/120
	I0408 12:40:45.627150  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 75/120
	I0408 12:40:46.629038  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 76/120
	I0408 12:40:47.630700  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 77/120
	I0408 12:40:48.632444  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 78/120
	I0408 12:40:49.634266  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 79/120
	I0408 12:40:50.637048  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 80/120
	I0408 12:40:51.638490  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 81/120
	I0408 12:40:52.640087  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 82/120
	I0408 12:40:53.641537  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 83/120
	I0408 12:40:54.643212  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 84/120
	I0408 12:40:55.645595  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 85/120
	I0408 12:40:56.647391  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 86/120
	I0408 12:40:57.649512  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 87/120
	I0408 12:40:58.650959  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 88/120
	I0408 12:40:59.652805  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 89/120
	I0408 12:41:00.654141  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 90/120
	I0408 12:41:01.655773  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 91/120
	I0408 12:41:02.657283  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 92/120
	I0408 12:41:03.658972  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 93/120
	I0408 12:41:04.660930  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 94/120
	I0408 12:41:05.663326  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 95/120
	I0408 12:41:06.664770  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 96/120
	I0408 12:41:07.666305  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 97/120
	I0408 12:41:08.667758  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 98/120
	I0408 12:41:09.669098  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 99/120
	I0408 12:41:10.670552  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 100/120
	I0408 12:41:11.672226  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 101/120
	I0408 12:41:12.673866  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 102/120
	I0408 12:41:13.675192  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 103/120
	I0408 12:41:14.676860  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 104/120
	I0408 12:41:15.679200  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 105/120
	I0408 12:41:16.681479  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 106/120
	I0408 12:41:17.683119  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 107/120
	I0408 12:41:18.684703  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 108/120
	I0408 12:41:19.686174  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 109/120
	I0408 12:41:20.688557  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 110/120
	I0408 12:41:21.690268  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 111/120
	I0408 12:41:22.691884  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 112/120
	I0408 12:41:23.694365  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 113/120
	I0408 12:41:24.695886  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 114/120
	I0408 12:41:25.698161  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 115/120
	I0408 12:41:26.699678  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 116/120
	I0408 12:41:27.701422  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 117/120
	I0408 12:41:28.702954  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 118/120
	I0408 12:41:29.704516  432357 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for machine to stop 119/120
	I0408 12:41:30.705747  432357 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0408 12:41:30.705822  432357 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0408 12:41:30.707906  432357 out.go:177] 
	W0408 12:41:30.709657  432357 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0408 12:41:30.709676  432357 out.go:239] * 
	* 
	W0408 12:41:30.713198  432357 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:41:30.715016  432357 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-527454 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
E0408 12:41:32.310317  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:34.251125  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454: exit status 3 (18.547306367s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:49.264054  433124 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E0408 12:41:49.264088  433124 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527454" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.11s)
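The Stop failure above exits with GUEST_STOP_TIMEOUT after libmachine polls 120 times (roughly two minutes) without the KVM guest powering off. One way to confirm whether the domain ever shut down, sketched with standard libvirt tooling; the domain name comes from the log, and forcing it off with virsh is an assumption about how to unblock the host, not something the test does:

	# check the libvirt domain state for the stuck profile
	sudo virsh list --all | grep default-k8s-diff-port-527454

	# if it is still "running", force it off and remove the profile
	sudo virsh destroy default-k8s-diff-port-527454
	minikube delete -p default-k8s-diff-port-527454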

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-135234 --alsologtostderr -v=3
E0408 12:39:34.440851  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-135234 --alsologtostderr -v=3: exit status 82 (2m0.555112251s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-135234"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:39:34.314847  432461 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:39:34.314987  432461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:39:34.314997  432461 out.go:304] Setting ErrFile to fd 2...
	I0408 12:39:34.315003  432461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:39:34.315192  432461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:39:34.315422  432461 out.go:298] Setting JSON to false
	I0408 12:39:34.315499  432461 mustload.go:65] Loading cluster: no-preload-135234
	I0408 12:39:34.315817  432461 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:39:34.315885  432461 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:39:34.316047  432461 mustload.go:65] Loading cluster: no-preload-135234
	I0408 12:39:34.316156  432461 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:39:34.316186  432461 stop.go:39] StopHost: no-preload-135234
	I0408 12:39:34.316579  432461 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:39:34.316639  432461 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:39:34.331784  432461 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0408 12:39:34.332275  432461 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:39:34.332859  432461 main.go:141] libmachine: Using API Version  1
	I0408 12:39:34.332883  432461 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:39:34.333253  432461 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:39:34.335915  432461 out.go:177] * Stopping node "no-preload-135234"  ...
	I0408 12:39:34.337089  432461 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 12:39:34.337123  432461 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:39:34.337363  432461 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 12:39:34.337394  432461 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:39:34.340294  432461 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:39:34.340778  432461 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:37:24 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:39:34.340806  432461 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:39:34.340977  432461 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:39:34.341144  432461 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:39:34.341290  432461 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:39:34.341427  432461 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:39:34.452939  432461 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 12:39:34.514192  432461 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 12:39:34.583766  432461 main.go:141] libmachine: Stopping "no-preload-135234"...
	I0408 12:39:34.583809  432461 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:39:34.585691  432461 main.go:141] libmachine: (no-preload-135234) Calling .Stop
	I0408 12:39:34.589580  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 0/120
	I0408 12:39:35.591311  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 1/120
	I0408 12:39:36.592822  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 2/120
	I0408 12:39:37.594881  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 3/120
	I0408 12:39:38.596722  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 4/120
	I0408 12:39:39.598392  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 5/120
	I0408 12:39:40.599879  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 6/120
	I0408 12:39:41.601290  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 7/120
	I0408 12:39:42.603182  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 8/120
	I0408 12:39:43.604686  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 9/120
	I0408 12:39:44.606947  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 10/120
	I0408 12:39:45.608254  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 11/120
	I0408 12:39:46.609746  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 12/120
	I0408 12:39:47.611458  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 13/120
	I0408 12:39:48.613123  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 14/120
	I0408 12:39:49.615371  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 15/120
	I0408 12:39:50.617060  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 16/120
	I0408 12:39:51.618680  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 17/120
	I0408 12:39:52.620258  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 18/120
	I0408 12:39:53.621724  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 19/120
	I0408 12:39:54.623392  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 20/120
	I0408 12:39:55.625194  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 21/120
	I0408 12:39:56.626983  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 22/120
	I0408 12:39:57.628837  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 23/120
	I0408 12:39:58.630749  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 24/120
	I0408 12:39:59.633205  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 25/120
	I0408 12:40:00.634850  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 26/120
	I0408 12:40:01.636272  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 27/120
	I0408 12:40:02.638186  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 28/120
	I0408 12:40:03.639904  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 29/120
	I0408 12:40:04.641846  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 30/120
	I0408 12:40:05.643518  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 31/120
	I0408 12:40:06.645177  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 32/120
	I0408 12:40:07.646854  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 33/120
	I0408 12:40:08.648771  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 34/120
	I0408 12:40:09.650962  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 35/120
	I0408 12:40:10.652500  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 36/120
	I0408 12:40:11.654404  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 37/120
	I0408 12:40:12.656309  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 38/120
	I0408 12:40:13.657683  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 39/120
	I0408 12:40:14.660290  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 40/120
	I0408 12:40:15.662123  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 41/120
	I0408 12:40:16.663892  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 42/120
	I0408 12:40:17.665900  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 43/120
	I0408 12:40:18.667587  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 44/120
	I0408 12:40:19.670136  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 45/120
	I0408 12:40:20.671840  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 46/120
	I0408 12:40:21.673336  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 47/120
	I0408 12:40:22.675151  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 48/120
	I0408 12:40:23.676935  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 49/120
	I0408 12:40:24.679233  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 50/120
	I0408 12:40:25.680820  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 51/120
	I0408 12:40:26.682646  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 52/120
	I0408 12:40:27.684227  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 53/120
	I0408 12:40:28.685544  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 54/120
	I0408 12:40:29.688310  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 55/120
	I0408 12:40:30.690055  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 56/120
	I0408 12:40:31.691864  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 57/120
	I0408 12:40:32.693904  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 58/120
	I0408 12:40:33.695571  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 59/120
	I0408 12:40:34.697537  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 60/120
	I0408 12:40:35.699043  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 61/120
	I0408 12:40:36.700652  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 62/120
	I0408 12:40:37.702342  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 63/120
	I0408 12:40:38.703939  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 64/120
	I0408 12:40:39.706387  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 65/120
	I0408 12:40:40.708087  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 66/120
	I0408 12:40:41.709646  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 67/120
	I0408 12:40:42.711224  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 68/120
	I0408 12:40:43.712951  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 69/120
	I0408 12:40:44.715436  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 70/120
	I0408 12:40:45.717006  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 71/120
	I0408 12:40:46.718903  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 72/120
	I0408 12:40:47.720507  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 73/120
	I0408 12:40:48.722453  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 74/120
	I0408 12:40:49.724636  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 75/120
	I0408 12:40:50.726288  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 76/120
	I0408 12:40:51.727744  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 77/120
	I0408 12:40:52.729449  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 78/120
	I0408 12:40:53.730921  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 79/120
	I0408 12:40:54.732520  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 80/120
	I0408 12:40:55.734499  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 81/120
	I0408 12:40:56.736079  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 82/120
	I0408 12:40:57.737702  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 83/120
	I0408 12:40:58.739132  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 84/120
	I0408 12:40:59.741194  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 85/120
	I0408 12:41:00.742730  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 86/120
	I0408 12:41:01.744321  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 87/120
	I0408 12:41:02.746691  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 88/120
	I0408 12:41:03.748411  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 89/120
	I0408 12:41:04.750778  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 90/120
	I0408 12:41:05.752865  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 91/120
	I0408 12:41:06.754433  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 92/120
	I0408 12:41:07.755917  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 93/120
	I0408 12:41:08.757284  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 94/120
	I0408 12:41:09.759248  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 95/120
	I0408 12:41:10.760850  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 96/120
	I0408 12:41:11.762369  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 97/120
	I0408 12:41:12.763968  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 98/120
	I0408 12:41:13.765326  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 99/120
	I0408 12:41:14.767171  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 100/120
	I0408 12:41:15.769447  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 101/120
	I0408 12:41:16.771131  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 102/120
	I0408 12:41:17.773089  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 103/120
	I0408 12:41:18.774682  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 104/120
	I0408 12:41:19.777064  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 105/120
	I0408 12:41:20.778533  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 106/120
	I0408 12:41:21.780100  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 107/120
	I0408 12:41:22.781866  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 108/120
	I0408 12:41:23.783405  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 109/120
	I0408 12:41:24.786080  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 110/120
	I0408 12:41:25.788115  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 111/120
	I0408 12:41:26.789792  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 112/120
	I0408 12:41:27.791497  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 113/120
	I0408 12:41:28.793182  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 114/120
	I0408 12:41:29.795501  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 115/120
	I0408 12:41:30.796810  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 116/120
	I0408 12:41:31.798539  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 117/120
	I0408 12:41:32.800049  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 118/120
	I0408 12:41:33.801634  432461 main.go:141] libmachine: (no-preload-135234) Waiting for machine to stop 119/120
	I0408 12:41:34.802985  432461 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0408 12:41:34.803054  432461 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0408 12:41:34.805188  432461 out.go:177] 
	W0408 12:41:34.806490  432461 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0408 12:41:34.806531  432461 out.go:239] * 
	* 
	W0408 12:41:34.810133  432461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:41:34.811355  432461 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-135234 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234: exit status 3 (18.547517829s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:53.360171  433170 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	E0408 12:41:53.360196  433170 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-135234" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.10s)
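The 120 "Waiting for machine to stop N/120" lines above come from a poll-and-timeout loop: after the stop request, the driver's state is checked roughly once per second for up to 120 attempts, and if the VM is still "Running" at the end the command exits with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below illustrates that pattern under assumed names (VMDriver, stopWithTimeout, stuckVM are illustrative, not minikube's implementation).

package main

import (
	"errors"
	"fmt"
	"time"
)

// VMDriver is a hypothetical stand-in for the libmachine driver seen in the log.
type VMDriver interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running" or "Stopped"
}

// stopWithTimeout mirrors the "Waiting for machine to stop N/120" pattern:
// one state check per second, up to maxAttempts, then a timeout error that a
// CLI front end could map to GUEST_STOP_TIMEOUT (exit status 82).
func stopWithTimeout(d VMDriver, maxAttempts int) error {
	if err := d.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := d.State()
		if err != nil {
			return fmt.Errorf("state check failed: %w", err)
		}
		if state != "Running" {
			return nil // the VM reached a stopped state in time
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates the failure in this report: the guest never leaves "Running".
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Three attempts keep the demo short; the report shows 120.
	if err := stopWithTimeout(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}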

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-488947 --alsologtostderr -v=3
E0408 12:39:43.990616  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:39:53.816188  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:39:54.921308  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:40:07.590325  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:40:29.654075  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:29.659469  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:29.669855  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:29.690611  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:29.731014  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:29.811474  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:29.972219  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:30.292893  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:30.933681  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:32.214560  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:34.775037  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:35.881540  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:40:39.896186  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:40:50.136579  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:41:05.911421  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:41:10.617234  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:41:11.828866  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:11.834199  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:11.844595  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:11.865017  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:11.905605  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:11.986029  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:12.146740  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:12.467196  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:13.107617  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:14.388313  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:15.736650  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-488947 --alsologtostderr -v=3: exit status 82 (2m0.530094705s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-488947"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:39:43.958935  432546 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:39:43.959095  432546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:39:43.959105  432546 out.go:304] Setting ErrFile to fd 2...
	I0408 12:39:43.959109  432546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:39:43.959334  432546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:39:43.959624  432546 out.go:298] Setting JSON to false
	I0408 12:39:43.959741  432546 mustload.go:65] Loading cluster: embed-certs-488947
	I0408 12:39:43.960130  432546 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:39:43.960210  432546 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:39:43.960394  432546 mustload.go:65] Loading cluster: embed-certs-488947
	I0408 12:39:43.960525  432546 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:39:43.960562  432546 stop.go:39] StopHost: embed-certs-488947
	I0408 12:39:43.961040  432546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:39:43.961094  432546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:39:43.976641  432546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0408 12:39:43.977187  432546 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:39:43.977884  432546 main.go:141] libmachine: Using API Version  1
	I0408 12:39:43.977923  432546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:39:43.978316  432546 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:39:43.981223  432546 out.go:177] * Stopping node "embed-certs-488947"  ...
	I0408 12:39:43.982897  432546 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0408 12:39:43.982948  432546 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:39:43.983389  432546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0408 12:39:43.983433  432546 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:39:43.986850  432546 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:39:43.987377  432546 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:38:08 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:39:43.987420  432546 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:39:43.987706  432546 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:39:43.987938  432546 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:39:43.988147  432546 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:39:43.988325  432546 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:39:44.098847  432546 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0408 12:39:44.153941  432546 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0408 12:39:44.212126  432546 main.go:141] libmachine: Stopping "embed-certs-488947"...
	I0408 12:39:44.212178  432546 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:39:44.213785  432546 main.go:141] libmachine: (embed-certs-488947) Calling .Stop
	I0408 12:39:44.217611  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 0/120
	I0408 12:39:45.219267  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 1/120
	I0408 12:39:46.221210  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 2/120
	I0408 12:39:47.222748  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 3/120
	I0408 12:39:48.224387  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 4/120
	I0408 12:39:49.226674  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 5/120
	I0408 12:39:50.228361  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 6/120
	I0408 12:39:51.230043  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 7/120
	I0408 12:39:52.231948  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 8/120
	I0408 12:39:53.233552  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 9/120
	I0408 12:39:54.234977  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 10/120
	I0408 12:39:55.236431  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 11/120
	I0408 12:39:56.237982  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 12/120
	I0408 12:39:57.239541  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 13/120
	I0408 12:39:58.241303  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 14/120
	I0408 12:39:59.243747  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 15/120
	I0408 12:40:00.245215  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 16/120
	I0408 12:40:01.246841  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 17/120
	I0408 12:40:02.248386  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 18/120
	I0408 12:40:03.250179  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 19/120
	I0408 12:40:04.251533  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 20/120
	I0408 12:40:05.253021  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 21/120
	I0408 12:40:06.254653  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 22/120
	I0408 12:40:07.256195  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 23/120
	I0408 12:40:08.258182  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 24/120
	I0408 12:40:09.260456  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 25/120
	I0408 12:40:10.262141  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 26/120
	I0408 12:40:11.263754  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 27/120
	I0408 12:40:12.265740  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 28/120
	I0408 12:40:13.267509  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 29/120
	I0408 12:40:14.269301  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 30/120
	I0408 12:40:15.271180  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 31/120
	I0408 12:40:16.273074  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 32/120
	I0408 12:40:17.274499  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 33/120
	I0408 12:40:18.276431  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 34/120
	I0408 12:40:19.279156  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 35/120
	I0408 12:40:20.280855  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 36/120
	I0408 12:40:21.282528  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 37/120
	I0408 12:40:22.284294  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 38/120
	I0408 12:40:23.285992  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 39/120
	I0408 12:40:24.287788  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 40/120
	I0408 12:40:25.289153  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 41/120
	I0408 12:40:26.290754  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 42/120
	I0408 12:40:27.292667  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 43/120
	I0408 12:40:28.294368  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 44/120
	I0408 12:40:29.296575  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 45/120
	I0408 12:40:30.297924  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 46/120
	I0408 12:40:31.299768  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 47/120
	I0408 12:40:32.301480  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 48/120
	I0408 12:40:33.303078  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 49/120
	I0408 12:40:34.304709  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 50/120
	I0408 12:40:35.306491  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 51/120
	I0408 12:40:36.308251  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 52/120
	I0408 12:40:37.310702  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 53/120
	I0408 12:40:38.312444  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 54/120
	I0408 12:40:39.314920  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 55/120
	I0408 12:40:40.316587  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 56/120
	I0408 12:40:41.318380  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 57/120
	I0408 12:40:42.320340  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 58/120
	I0408 12:40:43.321695  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 59/120
	I0408 12:40:44.323012  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 60/120
	I0408 12:40:45.324696  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 61/120
	I0408 12:40:46.326574  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 62/120
	I0408 12:40:47.328261  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 63/120
	I0408 12:40:48.329758  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 64/120
	I0408 12:40:49.331877  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 65/120
	I0408 12:40:50.333372  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 66/120
	I0408 12:40:51.334904  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 67/120
	I0408 12:40:52.336468  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 68/120
	I0408 12:40:53.337873  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 69/120
	I0408 12:40:54.340336  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 70/120
	I0408 12:40:55.341784  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 71/120
	I0408 12:40:56.343561  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 72/120
	I0408 12:40:57.345110  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 73/120
	I0408 12:40:58.346573  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 74/120
	I0408 12:40:59.348872  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 75/120
	I0408 12:41:00.350436  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 76/120
	I0408 12:41:01.352233  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 77/120
	I0408 12:41:02.353581  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 78/120
	I0408 12:41:03.355029  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 79/120
	I0408 12:41:04.356551  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 80/120
	I0408 12:41:05.358347  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 81/120
	I0408 12:41:06.359702  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 82/120
	I0408 12:41:07.361363  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 83/120
	I0408 12:41:08.362898  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 84/120
	I0408 12:41:09.364895  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 85/120
	I0408 12:41:10.366602  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 86/120
	I0408 12:41:11.368054  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 87/120
	I0408 12:41:12.369741  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 88/120
	I0408 12:41:13.371425  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 89/120
	I0408 12:41:14.372908  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 90/120
	I0408 12:41:15.374665  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 91/120
	I0408 12:41:16.376194  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 92/120
	I0408 12:41:17.378088  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 93/120
	I0408 12:41:18.379842  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 94/120
	I0408 12:41:19.382020  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 95/120
	I0408 12:41:20.383674  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 96/120
	I0408 12:41:21.385194  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 97/120
	I0408 12:41:22.386972  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 98/120
	I0408 12:41:23.388530  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 99/120
	I0408 12:41:24.390720  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 100/120
	I0408 12:41:25.392373  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 101/120
	I0408 12:41:26.394202  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 102/120
	I0408 12:41:27.395913  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 103/120
	I0408 12:41:28.397458  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 104/120
	I0408 12:41:29.399682  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 105/120
	I0408 12:41:30.401185  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 106/120
	I0408 12:41:31.402778  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 107/120
	I0408 12:41:32.404259  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 108/120
	I0408 12:41:33.406063  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 109/120
	I0408 12:41:34.407436  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 110/120
	I0408 12:41:35.408870  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 111/120
	I0408 12:41:36.410655  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 112/120
	I0408 12:41:37.412872  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 113/120
	I0408 12:41:38.414298  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 114/120
	I0408 12:41:39.416501  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 115/120
	I0408 12:41:40.418055  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 116/120
	I0408 12:41:41.419582  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 117/120
	I0408 12:41:42.421096  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 118/120
	I0408 12:41:43.422725  432546 main.go:141] libmachine: (embed-certs-488947) Waiting for machine to stop 119/120
	I0408 12:41:44.424061  432546 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0408 12:41:44.424118  432546 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0408 12:41:44.426226  432546 out.go:177] 
	W0408 12:41:44.427601  432546 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0408 12:41:44.427622  432546 out.go:239] * 
	* 
	W0408 12:41:44.430933  432546 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:41:44.432248  432546 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-488947 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947
E0408 12:41:44.491501  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947: exit status 3 (18.654360968s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:42:03.088050  433232 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	E0408 12:42:03.088073  433232 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-488947" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-384148 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-384148 create -f testdata/busybox.yaml: exit status 1 (49.135428ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-384148" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-384148 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 6 (242.159219ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:16.459302  432985 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-384148" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 6 (238.934758ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:16.699415  433015 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-384148" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)
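DeployApp fails before any manifest is applied because the kubeconfig no longer contains an "old-k8s-version-384148" context (the preceding start did not complete), which is also why the status helper above suggests `minikube update-context`. A small Go sketch of that context-existence check follows, using client-go's clientcmd loader; the kubeconfig path and context name are the ones from this report, and the sketch is illustrative rather than part of the test suite.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and context name taken from the report above; adjust as needed.
	kubeconfig := "/home/jenkins/minikube-integration/18588-368424/kubeconfig"
	wanted := "old-k8s-version-384148"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[wanted]; !ok {
		// This is the condition kubectl reports as `context "..." does not exist`
		// and that `minikube status` flags as a stale kubeconfig endpoint.
		fmt.Printf("context %q not found; kubeconfig has %d context(s)\n", wanted, len(cfg.Contexts))
		os.Exit(1)
	}
	fmt.Println("context present:", wanted)
}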

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (70.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-384148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0408 12:41:16.949133  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:22.069459  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:41:24.010340  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.015723  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.026029  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.046433  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.086982  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.167378  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.327857  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:24.648551  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:25.289165  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:26.569688  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:41:29.130054  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-384148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m10.024182408s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-384148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-384148 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-384148 describe deploy/metrics-server -n kube-system: exit status 1 (47.212878ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-384148" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-384148 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 6 (246.266272ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:42:27.014866  433750 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-384148" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (70.32s)
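The addon enable failed one level down: the kubectl apply callbacks inside the VM could not reach the apiserver on localhost:8443, and the follow-up describe failed because the kubeconfig context had already been dropped. A hedged triage sketch based on the advisory box and the test's own follow-up command (the -p profile selector is added here as an assumption):

# Capture full cluster logs for an issue report, as the advisory suggests
out/minikube-linux-amd64 logs --file=logs.txt -p old-k8s-version-384148

# The deployment check the test attempts next; it requires a live kubeconfig context
kubectl --context old-k8s-version-384148 describe deploy/metrics-server -n kube-system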

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
E0408 12:41:51.578368  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454: exit status 3 (3.200121747s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:52.464116  433262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E0408 12:41:52.464136  433262 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-527454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0408 12:41:52.791278  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-527454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154689263s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-527454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454: exit status 3 (3.060831989s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:42:01.680140  433393 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E0408 12:42:01.680161  433393 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527454" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)
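This failure and the no-preload and embed-certs failures that follow share one signature: after the stop, the status probe cannot SSH into the VM (dial tcp ...:22: connect: no route to host), so the host reports "Error" instead of the expected "Stopped", and the subsequent addon enable exits with MK_ADDON_ENABLE_PAUSED for the same reason. A minimal reproduction sketch using the exact invocations from this log (profile default-k8s-diff-port-527454; on a clean stop the first command should print "Stopped"):

# The post-stop host-status probe the test helper runs
out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454

# The addon enable that then fails while the VM is unreachable over SSH
out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-527454 --images=MetricsScraper=registry.k8s.io/echoserver:1.4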

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234: exit status 3 (3.200246265s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:41:56.560134  433328 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	E0408 12:41:56.560159  433328 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-135234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0408 12:41:57.801797  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-135234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154271795s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-135234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234: exit status 3 (3.061033872s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:42:05.776131  433480 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host
	E0408 12:42:05.776149  433480 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-135234" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947
E0408 12:42:04.971891  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947: exit status 3 (3.168334999s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:42:06.256088  433510 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	E0408 12:42:06.256115  433510 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-488947 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-488947 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153892529s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-488947 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947: exit status 3 (3.061683917s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 12:42:15.472138  433627 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host
	E0408 12:42:15.472165  433627 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.159:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-488947" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (800.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-384148 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0408 12:42:33.752071  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:42:41.603643  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:41.609008  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:41.619358  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:41.639774  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:41.680145  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:41.760546  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:41.921480  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:42.242457  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:42.883485  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:44.164691  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:45.933137  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:42:46.725072  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:42:51.846239  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:43:02.086942  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:43:06.832454  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:43:13.499264  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:43:22.066931  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:43:22.567823  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:43:31.892186  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:43:44.543374  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:43:49.752323  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:43:55.672707  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:43:59.576894  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:44:03.529278  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:44:07.853667  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:44:13.957414  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:44:29.879833  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:44:41.642897  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:45:25.449672  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:45:29.654162  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:45:57.340498  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:46:11.828322  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:46:24.009687  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:46:39.513289  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:46:51.694692  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
E0408 12:47:41.604614  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:48:06.832522  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:48:09.290497  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
E0408 12:48:22.066206  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:48:31.892056  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:48:44.543194  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:49:13.958467  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:50:29.654438  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:51:11.828397  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
E0408 12:51:24.010355  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-384148 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (13m16.52652284s)

                                                
                                                
-- stdout --
	* [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
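The shell snippet above makes the /etc/hosts update idempotent: rewrite an existing 127.0.1.1 entry to the new hostname, otherwise append one. A minimal Go sketch of the same logic follows; the ensureHostsEntry helper is hypothetical and not part of minikube, which performs this step over SSH with sed/tee as logged.

    // ensureHostsEntry rewrites the 127.0.1.1 line of an /etc/hosts-style file so
    // it maps to the given hostname, appending the line if it is missing.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	found := false
    	for i, line := range lines {
    		fields := strings.Fields(line)
    		if len(fields) > 0 && fields[0] == "127.0.1.1" {
    			lines[i] = "127.0.1.1 " + hostname
    			found = true
    		}
    	}
    	if !found {
    		lines = append(lines, "127.0.1.1 "+hostname)
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-384148"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }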
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
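The guest clock check above runs `date +%s.%N` on the machine and compares the result with the host clock; here the 89.7ms delta is within tolerance. A rough Go sketch of that comparison is shown below; the function name and the tolerance value are illustrative, not minikube's.

    // clockDeltaWithinTolerance parses the guest's `date +%s.%N` output and
    // compares it with the local clock. Sketch for illustration only.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func clockDeltaWithinTolerance(guestOut string, tol time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	return delta, math.Abs(float64(delta)) <= float64(tol), nil
    }

    func main() {
    	d, ok, err := clockDeltaWithinTolerance("1712580459.470646239", 2*time.Second)
    	fmt.Println(d, ok, err)
    }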
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
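The sed invocations above rewrite individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup). A hedged Go sketch of that replace-or-append line edit follows; the helper name is hypothetical, while the file path and key/value strings are taken from the log.

    // setCrioConfKey replaces (or appends) a `key = "value"` line in a CRI-O
    // drop-in config, roughly mirroring the sed edits in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func setCrioConfKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	repl := fmt.Sprintf("%s = %q", key, value)
    	out := data
    	if re.Match(data) {
    		out = re.ReplaceAll(data, []byte(repl))
    	} else {
    		out = append(out, []byte("\n"+repl+"\n")...)
    	}
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	_ = setCrioConfKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
    	_ = setCrioConfKey(conf, "cgroup_manager", "cgroupfs")
    }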
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
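Because br_netfilter is not loaded yet, the sysctl probe above fails with status 255, so the module is loaded and IPv4 forwarding is switched on. A small Go sketch of that probe-then-fallback sequence is below; the commands match the log, but the helper itself is hypothetical.

    // ensureBridgeNetfilter mirrors the probe/fallback in the log: if the
    // bridge-nf-call-iptables sysctl is absent, load br_netfilter, then make
    // sure IPv4 forwarding is enabled.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// The sysctl only exists once the module is loaded.
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("loading br_netfilter: %w", err)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }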
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
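The kubeadm configuration shown above is rendered in memory from the logged options and copied to /var/tmp/minikube/kubeadm.yaml.new. As a sketch of how such YAML can be produced with text/template, the struct and template below are illustrative and are not minikube's actual generator; the values are taken from the log.

    // Render a small slice of the ClusterConfiguration shown in the log from a
    // Go template. Illustrative only.
    package main

    import (
    	"os"
    	"text/template"
    )

    type clusterCfg struct {
    	KubernetesVersion string
    	ControlPlane      string
    	PodSubnet         string
    	ServiceSubnet     string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlane}}:8443
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	cfg := clusterCfg{
    		KubernetesVersion: "v1.20.0",
    		ControlPlane:      "control-plane.minikube.internal",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    	}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, cfg)
    }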
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
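The block above installs each PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, for example), which is exactly the value the "openssl x509 -hash -noout" calls print. A minimal Go sketch of that hash-and-symlink step, assuming openssl on PATH and write access to the target directory; linkCACert and the hard-coded paths are illustrative, not minikube code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// creates the "<hash>.0" symlink that OpenSSL's cert lookup expects.
func linkCACert(pemPath, certsDir string) error {
	// "openssl x509 -hash -noout -in <pem>" prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}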
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
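Each "openssl x509 -noout -in ... -checkend 86400" call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so a clean pass means every control-plane cert is still valid for at least a day. The same check done natively with crypto/x509, as a sketch; the cert path is copied from the log and everything else is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to "openssl x509 -checkend": true if now+d passes NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}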
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
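The sweep above applies one pattern to admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf: grep the file for the expected control-plane endpoint and remove it when the endpoint is absent (here all four files are simply missing), so the kubeadm phases that follow can regenerate them. A sketch of that pattern, assuming local execution rather than the SSH runner used in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%s lacks %s, removing so kubeadm can regenerate it\n", path, endpoint)
			if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}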
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
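Five kubeadm init phases are then replayed in order: certs, kubeconfig, kubelet-start, control-plane and etcd local; each phase depends on the output of the earlier ones, so a failure should stop the sequence. A sketch that drives the same commands locally; the command strings are copied from the log, while the loop and error handling are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
			os.Exit(1) // later phases depend on earlier ones, so stop here
		}
	}
}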
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
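The two minutes of repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" above is the wait for the apiserver process, polled roughly every 500 ms; since it never appears, the run falls back to listing CRI containers and gathering logs, as shown next. A sketch of such a poll-until-deadline loop, assuming local execution and an illustrative timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until it appears
// or the timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}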
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
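Every "kubectl describe nodes" attempt here fails with a connection refused on localhost:8443, which simply means nothing is listening on the apiserver port yet. A trivial probe for that condition, assuming only the standard library:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Connection refused on this port is exactly what the describe-nodes errors report.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}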
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
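The recurring "container status" command (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a) prefers crictl and falls back to docker ps when crictl is missing or fails. The same fallback expressed directly in Go, as a sketch with an illustrative function name and local execution:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus lists all containers via crictl, falling back to docker.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("neither crictl nor docker produced a listing: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}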
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
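The timestamps above show a new `sudo pgrep -xnf kube-apiserver.*minikube.*` attempt roughly every three seconds, i.e. a poll-until-healthy loop. Below is a minimal Go sketch of such a loop, assuming only the standard library; the two-minute timeout, the three-second interval, and the function name apiserverRunning are illustrative assumptions, not values taken from minikube.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process mentioning
// "minikube" exists, using the same pgrep invocation seen in the log above.
// pgrep exits non-zero when nothing matches, which is treated as "not running".
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // roughly the retry cadence visible in the log
	}
	fmt.Println("gave up waiting for kube-apiserver")
}

In the run recorded here the process never appears, so each iteration falls through to the same round of log gathering.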
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
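Every `kubectl describe nodes` attempt above fails with "The connection to the server localhost:8443 was refused", meaning nothing is listening on the apiserver port. A small Go sketch of that reachability check, assuming only the standard library and an illustrative two-second timeout, is:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try to open a TCP connection to the apiserver address from the log.
	// A "connection refused" error here corresponds to the kubectl failure
	// above: no process is accepting connections on localhost:8443.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}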
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
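
Each cycle in the log above probes for a running kube-apiserver (a pgrep, then a crictl query for every control-plane component) and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. The sketch below reproduces that probe loop so it can be run by hand inside the minikube VM; the crictl and journalctl invocations are the ones quoted in the log, but the component list, the 3-second interval and the output paths are illustrative assumptions, not taken from minikube source.

#!/usr/bin/env bash
# Poll CRI-O for control-plane containers the same way the log above does.
# Assumes crictl and journalctl are on PATH inside the VM; the interval and
# the component list are illustrative assumptions.
components=(kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager)

while true; do
  missing=0
  for name in "${components[@]}"; do
    # Same query the log runs: list container IDs in any state, filtered by name.
    ids=$(sudo crictl ps -a --quiet --name="$name")
    if [ -z "$ids" ]; then
      echo "no container found matching \"$name\""
      missing=$((missing + 1))
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all control-plane containers present"
    break
  fi
  # Nothing is running yet: collect the same unit logs the test gathers.
  sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
  sudo journalctl -u crio -n 400 > /tmp/crio.log
  sleep 3
done
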
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
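
Every "describe nodes" attempt above fails the same way: with no kube-apiserver container found by the crictl probes, nothing is serving localhost:8443, so the kubectl call driven by /var/lib/minikube/kubeconfig is refused. A quick manual check of the same condition is sketched below; the use of ss inside the VM is an assumption, while the kubectl command is the one quoted verbatim in the log.

# Is anything listening on the apiserver port the kubeconfig points at?
sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443 - kube-apiserver is down"

# Re-run the exact command from the log; it keeps returning "connection refused"
# until the kube-apiserver container starts and binds localhost:8443.
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
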
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
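	The four grep/rm pairs above apply one rule: keep an existing kubeconfig only if it already references https://control-plane.minikube.internal:8443, otherwise remove it so the following `kubeadm init` can regenerate it. A hypothetical Go sketch of that check, assuming direct local file access instead of the ssh_runner used in the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any of the given kubeconfig files that do not
	// reference the expected control-plane endpoint, so kubeadm can rewrite them.
	// The endpoint and file list mirror the log above; error handling is simplified.
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: remove it (a missing file is fine).
				_ = os.Remove(f)
				fmt.Printf("removed or absent: %s\n", f)
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}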
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	* 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	* 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 

** /stderr **
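	The whole stderr block above turns on one symptom: the kubelet never answers http://localhost:10248/healthz, so kubeadm's wait-control-plane phase times out on both attempts and minikube gives up with K8S_KUBELET_NOT_RUNNING. A minimal triage sketch, assuming shell access to the failing VM (for example `out/minikube-linux-amd64 -p old-k8s-version-384148 ssh`); the commands are the ones the kubeadm and minikube output above already suggests:

		# why does the kubelet keep refusing connections on 10248?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# did cri-o ever create any control-plane containers?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	If the kubelet journal points at a cgroup-driver mismatch, the suggestion printed above is to retry the start with an explicit driver (other flags as in the failing invocation below):

		out/minikube-linux-amd64 start -p old-k8s-version-384148 --driver=kvm2 \
			--container-runtime=crio --kubernetes-version=v1.20.0 \
			--extra-config=kubelet.cgroup-driver=systemd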
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-384148 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (271.33678ms)

-- stdout --
	Running

-- /stdout --
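	Note that the host reports Running while the status command itself exits 2, which is why the helper marks the status error "may be ok" just below: --format={{.Host}} prints only the VM state, and a non-zero exit typically means some other component is not healthy. A sketch of a fuller check, assuming the same binary and profile, is simply to drop the format template:

		out/minikube-linux-amd64 status -p old-k8s-version-384148

	which reports Host, Kubelet, APIServer and Kubeconfig individually; given the log above, the kubelet and apiserver entries would be expected to show Stopped.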
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-384148 logs -n 25: (1.661907943s)
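	The `logs -n 25` run here only captures a short tail of the logs; for the GitHub issue suggested in the box above, the fuller bundle is the more useful artifact. A sketch, assuming the same binary and profile:

		out/minikube-linux-amd64 -p old-k8s-version-384148 logs --file=logs.txt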
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo cat                                               |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo containerd config dump                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:42:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:42:32.207974  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:38.288048  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:41.359947  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:47.439972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:50.512009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:56.591982  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:59.664002  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:05.744032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:08.816017  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:14.895990  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:17.967942  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:24.048010  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:27.119964  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:33.200067  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:36.272037  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:42.351972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:45.424082  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:51.503992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:54.576088  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:00.656001  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:03.728079  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:09.807949  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:12.880051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:18.960024  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:22.032036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:28.112053  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:31.183992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:37.264032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:40.336026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:46.416019  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:49.487998  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:55.568026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:58.640044  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:04.719978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:07.792028  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:13.871997  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:16.944057  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:23.023969  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:26.096051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:32.176049  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:35.247929  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:41.328036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:44.399954  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:50.480046  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:53.552034  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:59.632009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:02.704063  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:08.784031  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:11.856098  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:17.936013  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:21.007970  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:27.087978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:30.159984  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:36.240042  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:39.245220  433557 start.go:364] duration metric: took 4m33.298555643s to acquireMachinesLock for "no-preload-135234"
	I0408 12:46:39.245298  433557 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:39.245311  433557 fix.go:54] fixHost starting: 
	I0408 12:46:39.245782  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:39.245821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:39.261035  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0408 12:46:39.261632  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:39.262208  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:46:39.262234  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:39.262592  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:39.262819  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:39.262938  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:46:39.264995  433557 fix.go:112] recreateIfNeeded on no-preload-135234: state=Stopped err=<nil>
	I0408 12:46:39.265029  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	W0408 12:46:39.265203  433557 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:39.266971  433557 out.go:177] * Restarting existing kvm2 VM for "no-preload-135234" ...
	I0408 12:46:39.268140  433557 main.go:141] libmachine: (no-preload-135234) Calling .Start
	I0408 12:46:39.268315  433557 main.go:141] libmachine: (no-preload-135234) Ensuring networks are active...
	I0408 12:46:39.269323  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network default is active
	I0408 12:46:39.269669  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network mk-no-preload-135234 is active
	I0408 12:46:39.270047  433557 main.go:141] libmachine: (no-preload-135234) Getting domain xml...
	I0408 12:46:39.270763  433557 main.go:141] libmachine: (no-preload-135234) Creating domain...
	I0408 12:46:40.496145  433557 main.go:141] libmachine: (no-preload-135234) Waiting to get IP...
	I0408 12:46:40.497357  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.497870  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.497950  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.497853  434768 retry.go:31] will retry after 305.764185ms: waiting for machine to come up
	I0408 12:46:40.805894  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.806351  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.806380  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.806304  434768 retry.go:31] will retry after 359.02584ms: waiting for machine to come up
	I0408 12:46:39.242442  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:39.242498  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.242871  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:46:39.242935  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.243206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:46:39.245063  433439 machine.go:97] duration metric: took 4m37.367683512s to provisionDockerMachine
	I0408 12:46:39.245112  433439 fix.go:56] duration metric: took 4m37.391017413s for fixHost
	I0408 12:46:39.245118  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 4m37.391040241s
	W0408 12:46:39.245140  433439 start.go:713] error starting host: provision: host is not running
	W0408 12:46:39.245388  433439 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0408 12:46:39.245401  433439 start.go:728] Will try again in 5 seconds ...
	I0408 12:46:41.167272  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.167748  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.167779  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.167702  434768 retry.go:31] will retry after 412.762727ms: waiting for machine to come up
	I0408 12:46:41.582454  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.582959  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.582990  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.582904  434768 retry.go:31] will retry after 572.486121ms: waiting for machine to come up
	I0408 12:46:42.156830  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.157270  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.157294  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.157243  434768 retry.go:31] will retry after 706.130574ms: waiting for machine to come up
	I0408 12:46:42.865325  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.865829  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.865863  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.865762  434768 retry.go:31] will retry after 901.114252ms: waiting for machine to come up
	I0408 12:46:43.768578  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:43.769067  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:43.769103  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:43.769032  434768 retry.go:31] will retry after 1.160836088s: waiting for machine to come up
	I0408 12:46:44.931002  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:44.931408  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:44.931438  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:44.931349  434768 retry.go:31] will retry after 998.940623ms: waiting for machine to come up
	I0408 12:46:44.247774  433439 start.go:360] acquireMachinesLock for default-k8s-diff-port-527454: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:46:45.931728  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:45.932157  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:45.932241  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:45.932115  434768 retry.go:31] will retry after 1.43975568s: waiting for machine to come up
	I0408 12:46:47.373294  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:47.373786  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:47.373821  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:47.373733  434768 retry.go:31] will retry after 1.828434336s: waiting for machine to come up
	I0408 12:46:49.205019  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:49.205414  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:49.205462  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:49.205376  434768 retry.go:31] will retry after 2.847051956s: waiting for machine to come up
	I0408 12:46:52.055004  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:52.055561  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:52.055586  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:52.055517  434768 retry.go:31] will retry after 2.941262871s: waiting for machine to come up
	I0408 12:46:54.998158  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:54.998598  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:54.998631  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:54.998542  434768 retry.go:31] will retry after 3.082026915s: waiting for machine to come up
	I0408 12:46:59.561049  433674 start.go:364] duration metric: took 4m43.922045129s to acquireMachinesLock for "embed-certs-488947"
	I0408 12:46:59.561130  433674 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:59.561140  433674 fix.go:54] fixHost starting: 
	I0408 12:46:59.561636  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:59.561683  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:59.578117  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0408 12:46:59.578573  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:59.579047  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:46:59.579074  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:59.579432  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:59.579633  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:46:59.579852  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:46:59.581445  433674 fix.go:112] recreateIfNeeded on embed-certs-488947: state=Stopped err=<nil>
	I0408 12:46:59.581492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	W0408 12:46:59.581667  433674 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:59.584306  433674 out.go:177] * Restarting existing kvm2 VM for "embed-certs-488947" ...
	I0408 12:46:59.585750  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Start
	I0408 12:46:59.585971  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring networks are active...
	I0408 12:46:59.586749  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network default is active
	I0408 12:46:59.587136  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network mk-embed-certs-488947 is active
	I0408 12:46:59.587551  433674 main.go:141] libmachine: (embed-certs-488947) Getting domain xml...
	I0408 12:46:59.588302  433674 main.go:141] libmachine: (embed-certs-488947) Creating domain...
	I0408 12:46:58.084025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084608  433557 main.go:141] libmachine: (no-preload-135234) Found IP for machine: 192.168.61.48
	I0408 12:46:58.084660  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has current primary IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084668  433557 main.go:141] libmachine: (no-preload-135234) Reserving static IP address...
	I0408 12:46:58.085160  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.085198  433557 main.go:141] libmachine: (no-preload-135234) Reserved static IP address: 192.168.61.48
	I0408 12:46:58.085213  433557 main.go:141] libmachine: (no-preload-135234) DBG | skip adding static IP to network mk-no-preload-135234 - found existing host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"}
	I0408 12:46:58.085229  433557 main.go:141] libmachine: (no-preload-135234) DBG | Getting to WaitForSSH function...
	I0408 12:46:58.085240  433557 main.go:141] libmachine: (no-preload-135234) Waiting for SSH to be available...
	I0408 12:46:58.087595  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.087990  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.088025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.088155  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH client type: external
	I0408 12:46:58.088178  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa (-rw-------)
	I0408 12:46:58.088210  433557 main.go:141] libmachine: (no-preload-135234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:46:58.088228  433557 main.go:141] libmachine: (no-preload-135234) DBG | About to run SSH command:
	I0408 12:46:58.088241  433557 main.go:141] libmachine: (no-preload-135234) DBG | exit 0
	I0408 12:46:58.220043  433557 main.go:141] libmachine: (no-preload-135234) DBG | SSH cmd err, output: <nil>: 
	I0408 12:46:58.220440  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetConfigRaw
	I0408 12:46:58.221216  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.223881  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224184  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.224202  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224597  433557 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:46:58.224804  433557 machine.go:94] provisionDockerMachine start ...
	I0408 12:46:58.224828  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:58.225070  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.227668  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228048  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.228080  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228242  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.228438  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228647  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228780  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.228941  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.229238  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.229253  433557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:46:58.344562  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:46:58.344602  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.344888  433557 buildroot.go:166] provisioning hostname "no-preload-135234"
	I0408 12:46:58.344922  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.345147  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.347895  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348278  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.348311  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348433  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.348638  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348801  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348911  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.349077  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.349289  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.349303  433557 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-135234 && echo "no-preload-135234" | sudo tee /etc/hostname
	I0408 12:46:58.478959  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-135234
	
	I0408 12:46:58.478996  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.481692  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482164  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.482187  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482410  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.482643  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.482851  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.483032  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.483230  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.483446  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.483465  433557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-135234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-135234/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-135234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:46:58.606022  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:58.606059  433557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:46:58.606080  433557 buildroot.go:174] setting up certificates
	I0408 12:46:58.606092  433557 provision.go:84] configureAuth start
	I0408 12:46:58.606108  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.606465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.609605  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610046  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.610079  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.612452  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612756  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.612784  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612905  433557 provision.go:143] copyHostCerts
	I0408 12:46:58.612974  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:46:58.613029  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:46:58.613097  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:46:58.613200  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:46:58.613209  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:46:58.613232  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:46:58.613295  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:46:58.613302  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:46:58.613323  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:46:58.613438  433557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.no-preload-135234 san=[127.0.0.1 192.168.61.48 localhost minikube no-preload-135234]
	I0408 12:46:58.832264  433557 provision.go:177] copyRemoteCerts
	I0408 12:46:58.832335  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:46:58.832382  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.835259  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835609  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.835650  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835883  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.836158  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.836332  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.836468  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:58.922968  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:46:58.949601  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 12:46:58.976832  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:46:59.004643  433557 provision.go:87] duration metric: took 398.533019ms to configureAuth
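	(Annotation) The provisioning step above copies the host CA/client certs into the profile's .minikube directory and then generates a server certificate for the VM signed by that CA, with the SAN list printed in the log (127.0.0.1, the VM IP, localhost, minikube, and the profile name). A minimal Go sketch of issuing such a CA-signed server certificate with crypto/x509 is shown below; the key size, validity window, package name, and helper name are illustrative assumptions, not minikube's actual implementation.

```go
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA for the SANs
// seen in the log, splitting them into IP and DNS entries. Key size and
// validity are illustrative only.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
```

	The resulting server.pem and server-key.pem are what the following copyRemoteCerts step scps into /etc/docker on the VM.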
	I0408 12:46:59.004683  433557 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:46:59.004885  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:46:59.004988  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.008264  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008735  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.008783  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008987  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.009238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009416  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009542  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.009680  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.009866  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.009884  433557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:46:59.299880  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:46:59.299912  433557 machine.go:97] duration metric: took 1.075094362s to provisionDockerMachine
	I0408 12:46:59.299925  433557 start.go:293] postStartSetup for "no-preload-135234" (driver="kvm2")
	I0408 12:46:59.299940  433557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:46:59.299981  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.300373  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:46:59.300406  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.303274  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303769  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.303806  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303941  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.304222  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.304575  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.304874  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.395808  433557 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:46:59.400795  433557 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:46:59.400831  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:46:59.400914  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:46:59.401021  433557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:46:59.401162  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:46:59.411883  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:46:59.438486  433557 start.go:296] duration metric: took 138.54299ms for postStartSetup
	I0408 12:46:59.438546  433557 fix.go:56] duration metric: took 20.19323532s for fixHost
	I0408 12:46:59.438577  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.441875  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442334  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.442366  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442528  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.442753  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.442969  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.443101  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.443232  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.443414  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.443424  433557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:46:59.560853  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580419.531854515
	
	I0408 12:46:59.560881  433557 fix.go:216] guest clock: 1712580419.531854515
	I0408 12:46:59.560891  433557 fix.go:229] Guest: 2024-04-08 12:46:59.531854515 +0000 UTC Remote: 2024-04-08 12:46:59.438552641 +0000 UTC m=+293.653384531 (delta=93.301874ms)
	I0408 12:46:59.560918  433557 fix.go:200] guest clock delta is within tolerance: 93.301874ms
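	(Annotation) The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and accept the ~93ms drift because it falls within tolerance. A minimal Go sketch of that comparison is shown below; the 2s tolerance constant is an assumption for illustration, since the actual threshold is not printed in the log.

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the absolute drift between the
// guest clock and the host clock is small enough to skip resynchronisation.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest time parsed from the `date +%s.%N` output seen in the log.
	guest := time.Unix(1712580419, 531854515)
	host := time.Now()
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
```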
	I0408 12:46:59.560929  433557 start.go:83] releasing machines lock for "no-preload-135234", held for 20.315655744s
	I0408 12:46:59.560965  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.561244  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:59.564248  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564623  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.564658  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564758  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565245  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565434  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565524  433557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:46:59.565571  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.565726  433557 ssh_runner.go:195] Run: cat /version.json
	I0408 12:46:59.565752  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.568339  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568729  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568766  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.568789  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568931  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569139  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569201  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.569227  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.569300  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569392  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569486  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569647  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.569782  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569900  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.689264  433557 ssh_runner.go:195] Run: systemctl --version
	I0408 12:46:59.695704  433557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:46:59.848323  433557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:46:59.856068  433557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:46:59.856171  433557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:46:59.877460  433557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:46:59.877490  433557 start.go:494] detecting cgroup driver to use...
	I0408 12:46:59.877557  433557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:46:59.895329  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:46:59.910849  433557 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:46:59.910908  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:46:59.925541  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:46:59.941511  433557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:00.064454  433557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:00.218535  433557 docker.go:233] disabling docker service ...
	I0408 12:47:00.218614  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:00.234510  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:00.249703  433557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:00.403556  433557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:00.569324  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:00.585058  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:00.607536  433557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:00.607592  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.624701  433557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:00.624774  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.637414  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.649846  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.662725  433557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:00.675738  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.688667  433557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.710326  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.722619  433557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:00.734130  433557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:00.734227  433557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:00.749998  433557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
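	(Annotation) The sysctl probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so the code falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A small Go sketch of that probe-then-fallback sequence using os/exec is shown below; the function name and error handling are illustrative, not minikube's.

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl,
// load br_netfilter if the key is missing, then enable IP forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key is missing: the bridge netfilter module is not loaded yet.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
```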
	I0408 12:47:00.761556  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:00.881544  433557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:01.036952  433557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:01.037040  433557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:01.042260  433557 start.go:562] Will wait 60s for crictl version
	I0408 12:47:01.042329  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.046327  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:01.092359  433557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:01.092465  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.127373  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.165027  433557 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0408 12:47:00.888196  433674 main.go:141] libmachine: (embed-certs-488947) Waiting to get IP...
	I0408 12:47:00.889196  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:00.889766  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:00.889808  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:00.889702  434916 retry.go:31] will retry after 239.282192ms: waiting for machine to come up
	I0408 12:47:01.130508  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.131075  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.131111  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.131016  434916 retry.go:31] will retry after 388.837258ms: waiting for machine to come up
	I0408 12:47:01.522006  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.522413  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.522444  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.522364  434916 retry.go:31] will retry after 372.310428ms: waiting for machine to come up
	I0408 12:47:01.896325  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.896919  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.896954  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.896851  434916 retry.go:31] will retry after 574.930775ms: waiting for machine to come up
	I0408 12:47:02.474045  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.474626  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.474664  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.474557  434916 retry.go:31] will retry after 506.414729ms: waiting for machine to come up
	I0408 12:47:02.982589  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.983203  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.983238  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.983135  434916 retry.go:31] will retry after 614.351996ms: waiting for machine to come up
	I0408 12:47:03.599165  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:03.599682  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:03.599724  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:03.599640  434916 retry.go:31] will retry after 1.130025801s: waiting for machine to come up
	I0408 12:47:04.731350  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:04.731841  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:04.731874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:04.731791  434916 retry.go:31] will retry after 1.346613974s: waiting for machine to come up
	I0408 12:47:01.166849  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:47:01.169772  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170183  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:01.170211  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170523  433557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:01.175336  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:01.193759  433557 kubeadm.go:877] updating cluster {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:01.193949  433557 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 12:47:01.194017  433557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:01.234439  433557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0408 12:47:01.234466  433557 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:01.234547  433557 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.234575  433557 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.234589  433557 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.234625  433557 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 12:47:01.234576  433557 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.234562  433557 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.234696  433557 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.234554  433557 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.236654  433557 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.236678  433557 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.236701  433557 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 12:47:01.236686  433557 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.236630  433557 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236789  433557 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.475737  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.476344  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.482596  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.486680  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.490012  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.496685  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0408 12:47:01.510269  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.597119  433557 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0408 12:47:01.597179  433557 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.597238  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696018  433557 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0408 12:47:01.696123  433557 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.696148  433557 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0408 12:47:01.696196  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696201  433557 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.696237  433557 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0408 12:47:01.696254  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696265  433557 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.696299  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.710260  433557 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0408 12:47:01.710317  433557 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.710369  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799524  433557 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0408 12:47:01.799583  433557 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.799592  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.799616  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.799626  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.799618  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799679  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.799734  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.916654  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 12:47:01.916701  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.916783  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:01.916809  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.923863  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.923904  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.923974  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.924021  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.924065  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924176  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924067  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.926651  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0408 12:47:01.926681  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926722  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926783  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0408 12:47:01.974801  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0408 12:47:01.974875  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974939  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:01.974969  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974944  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0408 12:47:02.062944  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.916991  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.990237597s)
	I0408 12:47:04.917016  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.942055075s)
	I0408 12:47:04.917036  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0408 12:47:04.917040  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0408 12:47:04.917047  433557 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917098  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917117  433557 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.854126587s)
	I0408 12:47:04.917187  433557 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0408 12:47:04.917233  433557 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.917278  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:06.080429  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:06.080910  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:06.080942  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:06.080866  434916 retry.go:31] will retry after 1.125692215s: waiting for machine to come up
	I0408 12:47:07.208553  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:07.209015  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:07.209040  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:07.208961  434916 retry.go:31] will retry after 1.958080491s: waiting for machine to come up
	I0408 12:47:09.169878  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:09.170289  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:09.170319  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:09.170243  434916 retry.go:31] will retry after 2.241966019s: waiting for machine to come up
	I0408 12:47:08.833969  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.916836964s)
	I0408 12:47:08.834011  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0408 12:47:08.834029  433557 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834032  433557 ssh_runner.go:235] Completed: which crictl: (3.916731005s)
	I0408 12:47:08.834085  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834101  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:11.414435  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:11.414829  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:11.414851  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:11.414786  434916 retry.go:31] will retry after 2.815941766s: waiting for machine to come up
	I0408 12:47:14.233868  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:14.234272  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:14.234318  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:14.234228  434916 retry.go:31] will retry after 3.213192238s: waiting for machine to come up
	I0408 12:47:10.925471  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091353526s)
	I0408 12:47:10.925519  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0408 12:47:10.925542  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925581  433557 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.091434251s)
	I0408 12:47:10.925612  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925673  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 12:47:10.925782  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:12.405175  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.479529413s)
	I0408 12:47:12.405221  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0408 12:47:12.405238  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:12.405236  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.479424271s)
	I0408 12:47:12.405270  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0408 12:47:12.405296  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:14.283021  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (1.877693108s)
	I0408 12:47:14.283061  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0408 12:47:14.283079  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:14.283143  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
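	(Annotation) Because no preload tarball exists for v1.30.0-rc.0, the cache_images phase above follows the same pattern for every required image: inspect what the runtime already has, remove the stale tag when the stored hash does not match, skip re-copying a tarball that is already under /var/lib/minikube/images, and `podman load` it so CRI-O can use it. A condensed Go sketch of that per-image loop is shown below, shelling out the same commands seen in the log; the hash comparison, the transfer step, and error handling are deliberately simplified.

```go
package main

import (
	"fmt"
	"os/exec"
	"path"
	"strings"
)

// loadCachedImage mirrors the per-image sequence in the log: check the runtime,
// drop the stale tag, and load the cached tarball (scp transfer elided).
func loadCachedImage(image, tarball string) error {
	// Is the image already present in the container runtime? (The real code
	// also compares the stored ID against an expected hash.)
	if out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output(); err == nil && strings.TrimSpace(string(out)) != "" {
		return nil // already present, nothing to do
	}
	// Remove any stale tag so the reload starts clean.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	fmt.Printf("Loading image: %s\n", path.Base(tarball))
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.30.0-rc.0": "/var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0",
		"registry.k8s.io/etcd:3.5.12-0":               "/var/lib/minikube/images/etcd_3.5.12-0",
	}
	for image, tarball := range images {
		if err := loadCachedImage(image, tarball); err != nil {
			fmt.Println("load failed:", image, err)
		}
	}
}
```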
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:17.451190  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451657  433674 main.go:141] libmachine: (embed-certs-488947) Found IP for machine: 192.168.72.159
	I0408 12:47:17.451705  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has current primary IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451725  433674 main.go:141] libmachine: (embed-certs-488947) Reserving static IP address...
	I0408 12:47:17.452192  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.452239  433674 main.go:141] libmachine: (embed-certs-488947) Reserved static IP address: 192.168.72.159
	I0408 12:47:17.452259  433674 main.go:141] libmachine: (embed-certs-488947) DBG | skip adding static IP to network mk-embed-certs-488947 - found existing host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"}
	I0408 12:47:17.452282  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Getting to WaitForSSH function...
	I0408 12:47:17.452297  433674 main.go:141] libmachine: (embed-certs-488947) Waiting for SSH to be available...
	I0408 12:47:17.454780  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455169  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.455208  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH client type: external
	I0408 12:47:17.455354  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa (-rw-------)
	I0408 12:47:17.455384  433674 main.go:141] libmachine: (embed-certs-488947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:17.455401  433674 main.go:141] libmachine: (embed-certs-488947) DBG | About to run SSH command:
	I0408 12:47:17.455414  433674 main.go:141] libmachine: (embed-certs-488947) DBG | exit 0
	I0408 12:47:17.585037  433674 main.go:141] libmachine: (embed-certs-488947) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:17.585443  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetConfigRaw
	I0408 12:47:17.586184  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.589492  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.589953  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.589985  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.590269  433674 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:47:17.590518  433674 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:17.590550  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:17.590798  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.593968  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594570  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.594615  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594832  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.595073  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595236  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595442  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.595661  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.595892  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.595905  433674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:17.708468  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:17.708504  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.708857  433674 buildroot.go:166] provisioning hostname "embed-certs-488947"
	I0408 12:47:17.708890  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.709083  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.712242  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712698  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.712732  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712928  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.713122  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713298  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713433  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.713612  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.713801  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.713817  433674 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
	I0408 12:47:17.842964  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488947
	
	I0408 12:47:17.843017  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.846436  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.846959  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.846992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.847225  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.847486  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847726  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847945  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.848182  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.848373  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.848397  433674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488947/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:17.975087  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
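Hostname provisioning above comes down to two idempotent steps carried out by the two SSH commands: write the name to /etc/hostname and make sure /etc/hosts resolves it. A condensed sketch of the same shell, using the profile name taken from this log:

    sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
    # point 127.0.1.1 at the new name unless an entry already exists
    if ! grep -q 'embed-certs-488947' /etc/hosts; then
      if grep -q '^127.0.1.1' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1.*/127.0.1.1 embed-certs-488947/' /etc/hosts
      else
        echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts
      fi
    fi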
	I0408 12:47:17.975123  433674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:17.975178  433674 buildroot.go:174] setting up certificates
	I0408 12:47:17.975198  433674 provision.go:84] configureAuth start
	I0408 12:47:17.975212  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.975606  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.979028  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979483  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.979510  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979754  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.982474  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.982944  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.982977  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.983174  433674 provision.go:143] copyHostCerts
	I0408 12:47:17.983230  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:17.983240  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:17.983291  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:17.983408  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:17.983419  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:17.983444  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:17.983500  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:17.983507  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:17.983526  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:17.983580  433674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488947 san=[127.0.0.1 192.168.72.159 embed-certs-488947 localhost minikube]
	I0408 12:47:18.043022  433674 provision.go:177] copyRemoteCerts
	I0408 12:47:18.043092  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:18.043162  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.046335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046722  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.046757  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046904  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.047145  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.047333  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.047475  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.134761  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:18.163745  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 12:47:18.192946  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:18.220790  433674 provision.go:87] duration metric: took 245.573885ms to configureAuth
	I0408 12:47:18.220827  433674 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:18.221067  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:47:18.221175  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.224177  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.224805  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.224839  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.225098  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.225363  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225569  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225797  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.226024  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.226202  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.226219  433674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:18.522682  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:18.522718  433674 machine.go:97] duration metric: took 932.18024ms to provisionDockerMachine
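The container-runtime option step above is partly eaten by the log's format-verb escaping (%!s(MISSING)); reconstructed from the echoed output, the guest-side command amounts to roughly:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio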
	I0408 12:47:18.522735  433674 start.go:293] postStartSetup for "embed-certs-488947" (driver="kvm2")
	I0408 12:47:18.522750  433674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:18.522776  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.523133  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:18.523174  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.526523  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.526872  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.526903  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.527101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.527336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.527512  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.527692  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.615353  433674 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:18.620414  433674 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:18.620447  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:18.620525  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:18.620627  433674 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:18.620726  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:18.630585  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:18.658952  433674 start.go:296] duration metric: took 136.200863ms for postStartSetup
	I0408 12:47:18.659004  433674 fix.go:56] duration metric: took 19.097863992s for fixHost
	I0408 12:47:18.659037  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.662115  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662571  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.662606  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662843  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.663100  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663308  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663480  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.663676  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.663919  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.663939  433674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:18.781355  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580438.730334929
	
	I0408 12:47:18.781402  433674 fix.go:216] guest clock: 1712580438.730334929
	I0408 12:47:18.781427  433674 fix.go:229] Guest: 2024-04-08 12:47:18.730334929 +0000 UTC Remote: 2024-04-08 12:47:18.659010209 +0000 UTC m=+303.178294166 (delta=71.32472ms)
	I0408 12:47:18.781457  433674 fix.go:200] guest clock delta is within tolerance: 71.32472ms
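The guest-clock check is just a date call on the VM (its format verbs are mangled to %!s(MISSING).%!N(MISSING) in the log) compared against the host's wall clock. A minimal manual sketch, assuming the profile's SSH key is usable with the plain ssh binary (minikube itself does this in-process), with the key path and IP taken from this log:

    key=/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa
    guest=$(ssh -i "$key" docker@192.168.72.159 date +%s.%N)
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock delta: %.3fs\n", h - g }'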
	I0408 12:47:18.781465  433674 start.go:83] releasing machines lock for "embed-certs-488947", held for 19.22036189s
	I0408 12:47:18.781502  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.781800  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:18.784825  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785270  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.785313  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786104  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786456  433674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:18.786501  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.786626  433674 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:18.786660  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.789409  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789704  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790019  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790149  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790306  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790322  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790338  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790495  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790528  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790745  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.790867  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790997  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.911025  433674 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:18.917785  433674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:19.070383  433674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:19.077521  433674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:19.077606  433674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:19.094598  433674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
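The find invocation above loses its -printf argument to the same %!p(MISSING) escaping; reconstructed, the step that renamed 87-podman-bridge.conflist looks approximately like this (GNU find substitutes {} inside the sh -c string):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;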
	I0408 12:47:19.094636  433674 start.go:494] detecting cgroup driver to use...
	I0408 12:47:19.094750  433674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:19.111163  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:19.125621  433674 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:19.125688  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:19.141948  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:19.156671  433674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:19.281688  433674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:19.455445  433674 docker.go:233] disabling docker service ...
	I0408 12:47:19.455519  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:19.474594  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:19.491301  433674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:19.646063  433674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:19.786075  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:19.803535  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:19.829204  433674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:19.829282  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.842132  433674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:19.842201  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.853915  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.866449  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.879235  433674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:19.899411  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.920363  433674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.946414  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.958824  433674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:19.969691  433674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:19.969754  433674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:19.986458  433674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:19.998655  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:20.157494  433674 ssh_runner.go:195] Run: sudo systemctl restart crio
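Taken together, the runtime preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf in place and primes the kernel before restarting CRI-O. The same sequence collected into one sketch, with paths and values straight from the log:

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    # let unprivileged pods bind low ports
    sudo grep -q '^ *default_sysctls' "$conf" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo modprobe br_netfilter          # bridge-nf-call-iptables was not yet available
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio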
	I0408 12:47:20.318209  433674 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:20.318287  433674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:20.325414  433674 start.go:562] Will wait 60s for crictl version
	I0408 12:47:20.325490  433674 ssh_runner.go:195] Run: which crictl
	I0408 12:47:20.330070  433674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:20.383808  433674 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:20.383959  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.417705  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.454321  433674 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:47:20.456101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:20.460035  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.460734  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:20.460774  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.461140  433674 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:20.467650  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:20.486936  433674 kubeadm.go:877] updating cluster {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:20.487105  433674 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:47:20.487176  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:20.529152  433674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:47:20.529293  433674 ssh_runner.go:195] Run: which lz4
	I0408 12:47:16.552712  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.26954566s)
	I0408 12:47:16.552781  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0408 12:47:16.552797  433557 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:16.552839  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:17.512103  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 12:47:17.512151  433557 cache_images.go:123] Successfully loaded all cached images
	I0408 12:47:17.512158  433557 cache_images.go:92] duration metric: took 16.277680364s to LoadCachedImages
	I0408 12:47:17.512171  433557 kubeadm.go:928] updating node { 192.168.61.48 8443 v1.30.0-rc.0 crio true true} ...
	I0408 12:47:17.512324  433557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-135234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:17.512440  433557 ssh_runner.go:195] Run: crio config
	I0408 12:47:17.561382  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:17.561424  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:17.561441  433557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:17.561472  433557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-135234 NodeName:no-preload-135234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:17.561681  433557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-135234"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
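The rendered config above is what later gets copied to /var/tmp/minikube/kubeadm.yaml and replayed phase-by-phase through kubeadm. A quick way to sanity-check a config like this by hand, purely illustrative and not something this run does, is kubeadm's dry-run mode:

    sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run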
	
	I0408 12:47:17.561807  433557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0408 12:47:17.574237  433557 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:17.574321  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:17.587129  433557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0408 12:47:17.609022  433557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0408 12:47:17.629656  433557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0408 12:47:17.650373  433557 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:17.655031  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:17.670872  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:17.811548  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:17.830945  433557 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234 for IP: 192.168.61.48
	I0408 12:47:17.830974  433557 certs.go:194] generating shared ca certs ...
	I0408 12:47:17.831000  433557 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:17.831219  433557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:17.831277  433557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:17.831290  433557 certs.go:256] generating profile certs ...
	I0408 12:47:17.831453  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/client.key
	I0408 12:47:17.831521  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key.dbd08c09
	I0408 12:47:17.831577  433557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key
	I0408 12:47:17.831823  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:17.831891  433557 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:17.831906  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:17.831946  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:17.831978  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:17.832007  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:17.832059  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:17.832899  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:17.869894  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:17.902893  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:17.943547  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:17.990462  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:47:18.026697  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:18.055643  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:18.083357  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:47:18.109247  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:18.134513  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:18.161811  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:18.189968  433557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:18.210173  433557 ssh_runner.go:195] Run: openssl version
	I0408 12:47:18.216813  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:18.230693  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236461  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236526  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.244183  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:18.257589  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:18.271235  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277004  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277088  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.283549  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:18.296789  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:18.309587  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314537  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314608  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.320942  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
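Each CA bundle above is installed the same way: expose it under /etc/ssl/certs, hash it with openssl, and symlink the subject hash (e.g. b5213941.0) back to it, which is how OpenSSL's hash-based lookup finds trust anchors. The per-certificate step as a sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem           # expose the bundle under /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in "$pem")               # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"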
	I0408 12:47:18.333407  433557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:18.338637  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:18.345365  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:18.352262  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:18.359464  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:18.366233  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:18.373280  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
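The -checkend 86400 calls above ask whether each control-plane certificate remains valid for at least another 24 hours; openssl exits non-zero if one would expire within that window. The same checks as a single loop:

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
               etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "${crt}.crt expires within 24h"
    done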
	I0408 12:47:18.380134  433557 kubeadm.go:391] StartCluster: {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:18.380291  433557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:18.380403  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.423068  433557 cri.go:89] found id: ""
	I0408 12:47:18.423164  433557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:18.435458  433557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:18.435497  433557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:18.435503  433557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:18.435562  433557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:18.447509  433557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:18.448720  433557 kubeconfig.go:125] found "no-preload-135234" server: "https://192.168.61.48:8443"
	I0408 12:47:18.451154  433557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:18.463246  433557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.48
	I0408 12:47:18.463299  433557 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:18.463315  433557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:18.463394  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.522929  433557 cri.go:89] found id: ""
	I0408 12:47:18.523011  433557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:18.546346  433557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:18.558613  433557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:18.558640  433557 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:18.558714  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:18.570020  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:18.570106  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:18.581323  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:18.593718  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:18.593778  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:18.606889  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.619251  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:18.619320  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.632343  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:18.644913  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:18.645004  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
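The four grep-then-rm pairs above enforce one rule: any leftover kubeconfig that does not already point at https://control-plane.minikube.internal:8443 is removed so the kubeadm phases below can regenerate it. Equivalent loop form:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done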
	I0408 12:47:18.656965  433557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:18.670774  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:18.785507  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:19.988135  433557 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202584017s)
	I0408 12:47:19.988174  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.235430  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.316709  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.456307  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:20.456393  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
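On this restart path minikube never runs a full kubeadm init; it replays the individual phases against the generated config and then waits for the apiserver process. Condensed from the commands above (the phase strings are deliberately left unquoted in the loop body so each splits into its arguments):

    KPATH=/var/lib/minikube/binaries/v1.30.0-rc.0
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config "$CFG"
    done
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # poll until the apiserver process appears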
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
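
The libmachine lines above poll for the domain's DHCP lease with growing, jittered delays before each retry. A minimal sketch of that retry pattern, written as a hypothetical standalone helper rather than minikube's actual retry package:

// retry_sketch.go: minimal sketch of the "will retry after ...: waiting for machine
// to come up" pattern seen above (hypothetical helper, not minikube's retry code).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a jittered, growing delay until it succeeds
// or the total time budget is exhausted.
func retryUntil(budget time.Duration, fn func() error) error {
	deadline := time.Now().Add(budget)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}
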
	I0408 12:47:20.534154  433674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:47:20.538991  433674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:20.539034  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:47:22.249270  433674 crio.go:462] duration metric: took 1.715182486s to copy over tarball
	I0408 12:47:22.249391  433674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:24.966695  433674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.717265287s)
	I0408 12:47:24.966730  433674 crio.go:469] duration metric: took 2.717416948s to extract the tarball
	I0408 12:47:24.966740  433674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:25.007656  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:25.063445  433674 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:47:25.063482  433674 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:47:25.063494  433674 kubeadm.go:928] updating node { 192.168.72.159 8443 v1.29.3 crio true true} ...
	I0408 12:47:25.063627  433674 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-488947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:25.063745  433674 ssh_runner.go:195] Run: crio config
	I0408 12:47:25.122219  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:25.122282  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:25.122298  433674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:25.122330  433674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488947 NodeName:embed-certs-488947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:25.122556  433674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-488947"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
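
The kubeadm config dumped above is one multi-document YAML combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A hedged sketch for splitting such a file and listing each document's kind, assuming gopkg.in/yaml.v3 is available (it is not part of minikube itself):

// inspect_kubeadm_yaml.go: hypothetical sketch that splits the multi-document
// kubeadm.yaml shown above and prints each document's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // no more YAML documents
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
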
	I0408 12:47:25.122633  433674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:47:25.137001  433674 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:25.137148  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:25.151168  433674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0408 12:47:25.171698  433674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:25.195101  433674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0408 12:47:25.216873  433674 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:25.221155  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:25.235740  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:25.354135  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
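
The bash one-liner at 12:47:25.221155 makes the /etc/hosts update idempotent: it drops any existing control-plane.minikube.internal entry before appending the new mapping. An illustrative Go version of the same pattern (host name and IP taken from the log, everything else hypothetical):

// hosts_sketch.go: illustrative Go version of the /etc/hosts rewrite shown above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.72.159" // node IP from the log above

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue // drop any stale entry for the control-plane alias
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host) // append the fresh mapping
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
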
	I0408 12:47:25.377763  433674 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947 for IP: 192.168.72.159
	I0408 12:47:25.377801  433674 certs.go:194] generating shared ca certs ...
	I0408 12:47:25.377824  433674 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:25.378055  433674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:25.378137  433674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:25.378161  433674 certs.go:256] generating profile certs ...
	I0408 12:47:25.378299  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/client.key
	I0408 12:47:25.378391  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key.21d2a89c
	I0408 12:47:25.378460  433674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key
	I0408 12:47:25.378628  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:25.378687  433674 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:25.378702  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:25.378736  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:25.378780  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:25.378818  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:25.378888  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:25.379800  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:25.422370  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:25.468967  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:25.516750  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:20.956916  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.456948  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.957498  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.982763  433557 api_server.go:72] duration metric: took 1.526450888s to wait for apiserver process to appear ...
	I0408 12:47:21.982797  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:21.982852  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.363696  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.363732  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.363758  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.398003  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.398065  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.483280  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
	I0408 12:47:26.461241  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.461305  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.461321  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.482518  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.482566  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.483554  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.497035  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.497075  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.983270  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.996515  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.996556  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.483125  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.491506  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.491549  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.983839  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.991044  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.991090  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.483669  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.490665  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:28.490703  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.983248  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.998278  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:47:29.007388  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:47:29.007429  433557 api_server.go:131] duration metric: took 7.024624495s to wait for apiserver health ...
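
The polling above shows the apiserver coming up in stages: 403 while the anonymous probe is rejected, 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200. A minimal sketch of such a /healthz poll loop; it skips TLS verification only to keep the sketch short, whereas a real client should trust the cluster CA:

// healthz_poll.go: minimal sketch of polling the apiserver /healthz endpoint
// until it reports healthy, as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.48:8443/healthz" // endpoint taken from the log above

	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}
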
	I0408 12:47:29.007444  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:29.007452  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:29.009506  433557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:25.561601  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 12:47:26.087896  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:26.116559  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:26.145651  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:26.174910  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:26.206627  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:26.238398  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:26.281684  433674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:26.306417  433674 ssh_runner.go:195] Run: openssl version
	I0408 12:47:26.313279  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:26.328106  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333727  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333810  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.340200  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:26.352316  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:26.364788  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.369928  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.370003  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.376525  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:26.388232  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:26.400301  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405327  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405407  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.411586  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:26.423764  433674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:26.428995  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:26.435932  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:26.442742  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:26.451458  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:26.458715  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:26.466424  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
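
The openssl probes above check whether each control-plane certificate expires within 86400 seconds (24 hours). The same check in Go with crypto/x509, shown as an illustrative sketch:

// cert_expiry.go: Go equivalent of the "openssl x509 -checkend 86400" probes above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // path from the log above
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Printf("%s expires within 24h (NotAfter=%s)\n", path, cert.NotAfter)
	} else {
		fmt.Printf("%s is valid beyond 24h (NotAfter=%s)\n", path, cert.NotAfter)
	}
}
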
	I0408 12:47:26.473948  433674 kubeadm.go:391] StartCluster: {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:26.474083  433674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:26.474158  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.515603  433674 cri.go:89] found id: ""
	I0408 12:47:26.515676  433674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:26.526818  433674 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:26.526845  433674 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:26.526851  433674 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:26.526908  433674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:26.537675  433674 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:26.538807  433674 kubeconfig.go:125] found "embed-certs-488947" server: "https://192.168.72.159:8443"
	I0408 12:47:26.540848  433674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:26.551278  433674 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.159
	I0408 12:47:26.551317  433674 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:26.551330  433674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:26.551406  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.591372  433674 cri.go:89] found id: ""
	I0408 12:47:26.591478  433674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:26.610486  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:26.621770  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:26.621794  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:26.621869  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:26.632480  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:26.632554  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:26.645878  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:26.659969  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:26.660068  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:26.670611  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.680945  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:26.681034  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.692201  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:26.703049  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:26.703126  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
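
The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is not found (here none of the files exist yet, so every grep fails and rm -f is effectively a no-op). An illustrative Go version of that logic:

// stale_conf_cleanup.go: illustrative version of the stale-config cleanup above:
// remove any kubeconfig under /etc/kubernetes that does not point at the endpoint.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q missing or stale, removing\n", f)
			os.Remove(f) // ignore "does not exist" errors, matching the log's rm -f
		}
	}
}
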
	I0408 12:47:26.715887  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:26.727464  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:26.956245  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.722655  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.973294  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.086774  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.203640  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:28.203755  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:28.704550  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.203852  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.704305  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.724333  433674 api_server.go:72] duration metric: took 1.520681062s to wait for apiserver process to appear ...
	I0408 12:47:29.724372  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:29.724402  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:29.010843  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:29.029631  433557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:29.052609  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:29.069954  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:29.070010  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:29.070022  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:29.070034  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:29.070043  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:29.070049  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:47:29.070076  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:29.070087  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:29.070098  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:47:29.070107  433557 system_pods.go:74] duration metric: took 17.469317ms to wait for pod list to return data ...
	I0408 12:47:29.070117  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:29.075401  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:29.075443  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:29.075459  433557 node_conditions.go:105] duration metric: took 5.335891ms to run NodePressure ...
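	[Editor's note] The system_pods and node_conditions lines above amount to two API calls: list the kube-system pods and read the node's reported capacity (ephemeral storage and CPU). A minimal client-go sketch of the same checks follows; the kubeconfig path is an assumption for illustration, not something taken from minikube's code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// "waiting for kube-system pods to appear" is a plain list call.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// The NodePressure check reads node capacity, as in the log lines above.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```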
	I0408 12:47:29.075489  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:29.403218  433557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409235  433557 kubeadm.go:733] kubelet initialised
	I0408 12:47:29.409263  433557 kubeadm.go:734] duration metric: took 6.014758ms waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409276  433557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:29.418787  433557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.441264  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441310  433557 pod_ready.go:81] duration metric: took 22.478832ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.441325  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441336  433557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.461805  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461916  433557 pod_ready.go:81] duration metric: took 20.564997ms for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.461945  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461982  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.475160  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475198  433557 pod_ready.go:81] duration metric: took 13.191566ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.475229  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475241  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.486266  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486306  433557 pod_ready.go:81] duration metric: took 11.046794ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.486321  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486331  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.857658  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857703  433557 pod_ready.go:81] duration metric: took 371.357848ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.857717  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857725  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.258154  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258194  433557 pod_ready.go:81] duration metric: took 400.459219ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.258208  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258230  433557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.656845  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656890  433557 pod_ready.go:81] duration metric: took 398.64565ms for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.656904  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656915  433557 pod_ready.go:38] duration metric: took 1.247627349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
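	[Editor's note] Every pod in the wait loop above is skipped with "node ... not Ready" because the check gates on the hosting node's Ready condition before consulting the pod's own Ready condition. A small sketch of those two condition checks is shown below; the helper names are the editor's, not minikube's.

```go
// Package readiness sketches the node/pod Ready-condition checks implied by
// the pod_ready log lines above.
package readiness

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether the Node's Ready condition is True. While it is
// false, the wait loop skips each pod, as seen in the log.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// podReady reports whether the Pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```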
	I0408 12:47:30.656947  433557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:47:30.683024  433557 ops.go:34] apiserver oom_adj: -16
	I0408 12:47:30.683055  433557 kubeadm.go:591] duration metric: took 12.247545723s to restartPrimaryControlPlane
	I0408 12:47:30.683067  433557 kubeadm.go:393] duration metric: took 12.302946s to StartCluster
	I0408 12:47:30.683095  433557 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.683214  433557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:30.685507  433557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.685852  433557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:47:30.687967  433557 out.go:177] * Verifying Kubernetes components...
	I0408 12:47:30.685951  433557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:47:30.686122  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:47:30.689462  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:30.689475  433557 addons.go:69] Setting storage-provisioner=true in profile "no-preload-135234"
	I0408 12:47:30.689511  433557 addons.go:234] Setting addon storage-provisioner=true in "no-preload-135234"
	W0408 12:47:30.689521  433557 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:47:30.689555  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.689573  433557 addons.go:69] Setting default-storageclass=true in profile "no-preload-135234"
	I0408 12:47:30.689620  433557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-135234"
	I0408 12:47:30.689956  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.689995  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.689996  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690026  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.690085  433557 addons.go:69] Setting metrics-server=true in profile "no-preload-135234"
	I0408 12:47:30.690135  433557 addons.go:234] Setting addon metrics-server=true in "no-preload-135234"
	W0408 12:47:30.690146  433557 addons.go:243] addon metrics-server should already be in state true
	I0408 12:47:30.690186  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.690614  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690692  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.710746  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0408 12:47:30.710947  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0408 12:47:30.711153  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0408 12:47:30.711301  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711752  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711839  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.712010  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712027  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712564  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.712757  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712780  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712911  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712926  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.713381  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.713427  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.713660  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714094  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714304  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.714365  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.714401  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.717892  433557 addons.go:234] Setting addon default-storageclass=true in "no-preload-135234"
	W0408 12:47:30.717959  433557 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:47:30.718004  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.718497  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.718577  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.734825  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0408 12:47:30.736890  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0408 12:47:30.756599  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.756681  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.757290  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757312  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757318  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757332  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757774  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.757849  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.758015  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.758082  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.760658  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.760732  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.762999  433557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:47:30.764689  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:47:30.764714  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:47:30.766392  433557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:30.764741  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.767890  433557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:30.767911  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:47:30.767933  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.772580  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.772714  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773015  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773038  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773423  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773449  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773462  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773663  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773875  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.773897  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.774038  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774074  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774163  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.774227  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.779694  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0408 12:47:30.780190  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.780772  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.780793  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.781114  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.781773  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.781821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.803661  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0408 12:47:30.804212  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.804828  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.804847  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.805397  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.805713  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.807761  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.808244  433557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:30.808269  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:47:30.808288  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.811598  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812078  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.812109  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812264  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.812465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.812702  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.812868  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:30.979880  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:31.004961  433557 node_ready.go:35] waiting up to 6m0s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:31.088114  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:31.110971  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:47:31.111017  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:47:31.150193  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:47:31.150229  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:47:31.184811  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.184899  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:47:31.214364  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.244802  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:32.406228  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.318067686s)
	I0408 12:47:32.406305  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406317  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.406830  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.406897  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.406913  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406921  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.407242  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407275  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407319  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.407329  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.532524  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318098791s)
	I0408 12:47:32.532662  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532694  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.532576  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287674494s)
	I0408 12:47:32.532774  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532799  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533022  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533041  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533052  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533060  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533223  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533280  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533286  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533294  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533301  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533457  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533516  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533539  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533546  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.534974  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.534991  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.535019  433557 addons.go:470] Verifying addon metrics-server=true in "no-preload-135234"
	I0408 12:47:32.543151  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.543183  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.543549  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.543571  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.546033  433557 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
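	[Editor's note] The addon flow above copies each manifest to /etc/kubernetes/addons inside the guest and applies it with the bundled kubectl, pointing KUBECONFIG at /var/lib/minikube/kubeconfig. A stripped-down sketch of that apply step is below; paths are copied from the log and error handling is kept minimal, so this is illustrative only.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies one addon manifest with the bundled kubectl and the
// in-guest kubeconfig, roughly matching the "sudo KUBECONFIG=... kubectl apply"
// Run lines above.
func applyAddon(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0-rc.0/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(kubectl, kubeconfig, m); err != nil {
			fmt.Println(err)
		}
	}
}
```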
	I0408 12:47:32.894282  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:32.894320  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:32.894336  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:32.988397  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:32.988442  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.224790  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.232146  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.232176  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.724683  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.729479  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.729520  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:34.224919  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:34.230233  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:47:34.247835  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:47:34.247872  433674 api_server.go:131] duration metric: took 4.523492127s to wait for apiserver health ...
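	[Editor's note] The healthz probe above first sees a 403 for the anonymous user, then 500s while several poststarthooks (rbac/bootstrap-roles, bootstrap-controller, and others) are still reported as failed, and finally a 200. A minimal Go sketch of such a polling loop follows; the address is copied from the log, TLS verification is skipped purely to keep the sketch short, and the timeout is an assumption.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, printing the failure body (the [+]/[-] check list)
// for any non-200 response.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Address from the log; timeout is an assumption.
	if err := pollHealthz("https://192.168.72.159:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```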
	I0408 12:47:34.247883  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:34.247890  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:34.249807  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:34.251603  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:34.265254  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:34.288078  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:34.301533  433674 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:34.301570  433674 system_pods.go:61] "coredns-76f75df574-hq2mm" [cfc7bd40-0b7d-4e00-ac55-b3ae796018ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:34.301577  433674 system_pods.go:61] "etcd-embed-certs-488947" [eb29ace5-8ad9-4080-a875-2eb83dcea583] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:34.301585  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [8e97033f-996a-4b64-9474-7b4d562eb1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:34.301591  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [b3db7631-d953-418e-9c72-f299d0287a2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:34.301595  433674 system_pods.go:61] "kube-proxy-2gn8m" [c31d8f0d-d6c1-4afa-b64c-7fc422d493f2] Running
	I0408 12:47:34.301600  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b9b29f85-7a75-4b09-b6cd-940ff42326d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:34.301604  433674 system_pods.go:61] "metrics-server-57f55c9bc5-z2ztl" [d9dc47ad-3370-4e55-a724-8c529c723992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:34.301607  433674 system_pods.go:61] "storage-provisioner" [4953dc3a-31ca-464d-9530-34f488ed9a02] Running
	I0408 12:47:34.301617  433674 system_pods.go:74] duration metric: took 13.514139ms to wait for pod list to return data ...
	I0408 12:47:34.301624  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:34.305931  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:34.305962  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:34.305974  433674 node_conditions.go:105] duration metric: took 4.345624ms to run NodePressure ...
	I0408 12:47:34.305993  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:34.598392  433674 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603606  433674 kubeadm.go:733] kubelet initialised
	I0408 12:47:34.603632  433674 kubeadm.go:734] duration metric: took 5.204237ms waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603641  433674 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:34.610027  433674 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:32.547718  433557 addons.go:505] duration metric: took 1.861769291s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0408 12:47:33.008857  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:35.510251  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:39.502172  433439 start.go:364] duration metric: took 55.254308447s to acquireMachinesLock for "default-k8s-diff-port-527454"
	I0408 12:47:39.502232  433439 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:39.502245  433439 fix.go:54] fixHost starting: 
	I0408 12:47:39.502725  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:39.502767  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:39.523738  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0408 12:47:39.525022  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:39.525614  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:47:39.525646  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:39.526077  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:39.526307  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:47:39.526448  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:47:39.528207  433439 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527454: state=Stopped err=<nil>
	I0408 12:47:39.528241  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	W0408 12:47:39.528449  433439 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:39.530360  433439 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-527454" ...
	I0408 12:47:36.618430  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.619713  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.009213  433557 node_ready.go:49] node "no-preload-135234" has status "Ready":"True"
	I0408 12:47:38.009241  433557 node_ready.go:38] duration metric: took 7.004239102s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:38.009250  433557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:38.014665  433557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020024  433557 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:38.020054  433557 pod_ready.go:81] duration metric: took 5.358174ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020067  433557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:40.030803  433557 pod_ready.go:102] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:39.531815  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Start
	I0408 12:47:39.531988  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring networks are active...
	I0408 12:47:39.532969  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network default is active
	I0408 12:47:39.533486  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network mk-default-k8s-diff-port-527454 is active
	I0408 12:47:39.533947  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Getting domain xml...
	I0408 12:47:39.534767  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Creating domain...
	I0408 12:47:40.935150  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting to get IP...
	I0408 12:47:40.936250  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:40.936778  435248 retry.go:31] will retry after 215.442539ms: waiting for machine to come up
	I0408 12:47:41.154393  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154940  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.154852  435248 retry.go:31] will retry after 274.982374ms: waiting for machine to come up
	I0408 12:47:41.431442  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.431990  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.432023  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.431933  435248 retry.go:31] will retry after 335.077282ms: waiting for machine to come up
	I0408 12:47:40.620537  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:42.622241  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:44.118493  433674 pod_ready.go:92] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.118532  433674 pod_ready.go:81] duration metric: took 9.508474788s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.118545  433674 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626843  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.626869  433674 pod_ready.go:81] duration metric: took 508.318376ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626882  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633488  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.633521  433674 pod_ready.go:81] duration metric: took 6.630145ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633535  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027744  433557 pod_ready.go:92] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.027771  433557 pod_ready.go:81] duration metric: took 3.007695895s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027782  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034038  433557 pod_ready.go:92] pod "kube-apiserver-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.034076  433557 pod_ready.go:81] duration metric: took 6.28617ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034090  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039232  433557 pod_ready.go:92] pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.039262  433557 pod_ready.go:81] duration metric: took 5.161613ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039277  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045793  433557 pod_ready.go:92] pod "kube-proxy-tr6td" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.045887  433557 pod_ready.go:81] duration metric: took 6.600896ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045908  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.209976  433557 pod_ready.go:92] pod "kube-scheduler-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.210003  433557 pod_ready.go:81] duration metric: took 164.085848ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.210018  433557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:43.220338  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:45.718170  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:41.768734  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769194  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.769131  435248 retry.go:31] will retry after 581.590127ms: waiting for machine to come up
	I0408 12:47:42.352156  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.352975  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.353017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:42.352850  435248 retry.go:31] will retry after 673.545679ms: waiting for machine to come up
	I0408 12:47:43.028329  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029066  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029101  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.028956  435248 retry.go:31] will retry after 690.795418ms: waiting for machine to come up
	I0408 12:47:43.721435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.721999  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.722025  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.721948  435248 retry.go:31] will retry after 941.917321ms: waiting for machine to come up
	I0408 12:47:44.665002  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665468  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665495  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:44.665406  435248 retry.go:31] will retry after 1.037587737s: waiting for machine to come up
	I0408 12:47:45.705319  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705792  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705822  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:45.705730  435248 retry.go:31] will retry after 1.287151334s: waiting for machine to come up
	I0408 12:47:46.640995  433674 pod_ready.go:102] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:48.558627  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.558666  433674 pod_ready.go:81] duration metric: took 3.925119514s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.558683  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583378  433674 pod_ready.go:92] pod "kube-proxy-2gn8m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.583405  433674 pod_ready.go:81] duration metric: took 24.715384ms for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583416  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598937  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.598969  433674 pod_ready.go:81] duration metric: took 15.544342ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598983  433674 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:47.918307  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:50.219513  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
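The grep/bash pair above keeps exactly one control-plane.minikube.internal entry in the guest's /etc/hosts: any stale line for that name is filtered out and the current IP is appended. A minimal Go sketch of the same rewrite, assuming local file access and printing the result instead of overwriting /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

const hostAlias = "control-plane.minikube.internal"

// upsertControlPlane drops any line ending in "\tcontrol-plane.minikube.internal"
// and appends the current mapping, mirroring the grep -v / echo pipeline above.
func upsertControlPlane(contents, ip string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+hostAlias) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, hostAlias)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Print the rewritten file rather than writing it back.
	fmt.Print(upsertControlPlane(strings.TrimRight(string(data), "\n"), "192.168.39.245"))
}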
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
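Each certificate copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above) so the system trust store can resolve it. A minimal Go sketch of that pattern, shelling out to openssl for the hash; it assumes root access to /etc/ssl/certs, whereas the real flow runs the equivalent sudo commands over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash returns what `openssl x509 -hash -noout -in cert` prints:
// the subject-name hash used to name links in /etc/ssl/certs.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	hash, err := subjectHash(cert)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, lerr := os.Lstat(link); os.IsNotExist(lerr) {
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	fmt.Println(cert, "->", link)
}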
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
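The six `openssl x509 -checkend 86400` calls above verify that each existing control-plane certificate remains valid for at least another 24 hours before the configuration is reused. A minimal Go sketch of the same check, assuming the certificate files are readable locally (the real flow runs openssl over SSH inside the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window; a 24h window matches the -checkend 86400 calls above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range os.Args[1:] {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}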
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
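Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml rather than doing a full init. A minimal sketch of that sequence, assuming kubeadm is on the local PATH; minikube actually invokes it via sudo over SSH with the versioned binaries directory prepended to PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	configPath := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		// Each phase runs against the same generated kubeadm config.
		args := append(append([]string{}, phase...), "--config", configPath)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}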
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:46.994162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994640  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994672  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:46.994593  435248 retry.go:31] will retry after 1.863771905s: waiting for machine to come up
	I0408 12:47:48.860673  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861257  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:48.861151  435248 retry.go:31] will retry after 2.204894376s: waiting for machine to come up
	I0408 12:47:51.067423  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067909  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067937  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:51.067864  435248 retry.go:31] will retry after 2.625423179s: waiting for machine to come up
	I0408 12:47:50.608007  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:53.108084  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:52.717545  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:55.218944  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.695295  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695826  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695862  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:53.695772  435248 retry.go:31] will retry after 4.111917473s: waiting for machine to come up
	I0408 12:47:55.606909  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:58.111708  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:57.717559  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:59.718066  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
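The repeated pgrep calls above form a roughly 500ms polling loop that waits for the kube-apiserver process to appear after the control-plane phase. A minimal local sketch of that wait, using the same pgrep invocation (the real loop runs it through ssh_runner and enforces an overall timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the check in the log: pgrep the newest process
// whose full command line matches kube-apiserver.*minikube.*
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}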
	I0408 12:47:57.809179  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809697  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809729  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:57.809632  435248 retry.go:31] will retry after 4.27502806s: waiting for machine to come up
	I0408 12:48:02.086033  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086558  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has current primary IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086586  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Found IP for machine: 192.168.50.7
	I0408 12:48:02.086603  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserving static IP address...
	I0408 12:48:02.087069  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.087105  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserved static IP address: 192.168.50.7
	I0408 12:48:02.087137  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | skip adding static IP to network mk-default-k8s-diff-port-527454 - found existing host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"}
	I0408 12:48:02.087158  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Getting to WaitForSSH function...
	I0408 12:48:02.087177  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for SSH to be available...
	I0408 12:48:02.089228  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089585  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.089608  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH client type: external
	I0408 12:48:02.089840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa (-rw-------)
	I0408 12:48:02.089885  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:48:02.089900  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | About to run SSH command:
	I0408 12:48:02.089917  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | exit 0
	I0408 12:48:02.216245  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | SSH cmd err, output: <nil>: 
	I0408 12:48:02.216684  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetConfigRaw
	I0408 12:48:02.217582  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.220543  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.220961  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.220995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.221282  433439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:48:02.221480  433439 machine.go:94] provisionDockerMachine start ...
	I0408 12:48:02.221499  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:02.221738  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.224371  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.224770  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.224802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.225007  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.225236  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225399  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225548  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.225740  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.225957  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.225970  433439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:48:02.336716  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:48:02.336754  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337074  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:48:02.337108  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337351  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.340133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340539  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.340583  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340653  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.340842  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341016  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341171  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.341346  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.341539  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.341556  433439 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-527454 && echo "default-k8s-diff-port-527454" | sudo tee /etc/hostname
	I0408 12:48:02.464462  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-527454
	
	I0408 12:48:02.464507  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.467682  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468082  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.468118  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468335  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.468595  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468782  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468954  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.469154  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.469372  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.469392  433439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-527454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-527454/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-527454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:48:02.593971  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:48:02.594006  433439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:48:02.594061  433439 buildroot.go:174] setting up certificates
	I0408 12:48:02.594078  433439 provision.go:84] configureAuth start
	I0408 12:48:02.594092  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.594431  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.597587  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.598043  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.600898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601267  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.601299  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601497  433439 provision.go:143] copyHostCerts
	I0408 12:48:02.601562  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:48:02.601588  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:48:02.601653  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:48:02.601841  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:48:02.601857  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:48:02.601888  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:48:02.601966  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:48:02.601981  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:48:02.602010  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
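copyHostCerts above refreshes ca.pem, cert.pem and key.pem in the .minikube root from the certs directory, removing any stale copy first. A minimal sketch of that refresh, assuming the default ~/.minikube layout seen in the log and an illustrative 0600 file mode:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyFile replaces dst with a fresh copy of src, matching the
// "found ..., removing ..." then "cp: ..." sequence in the log.
func copyFile(src, dst string, mode os.FileMode) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, mode)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	home := os.ExpandEnv("$HOME/.minikube") // default layout seen in the log
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		src := filepath.Join(home, "certs", name)
		dst := filepath.Join(home, name)
		if err := copyFile(src, dst, 0o600); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}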
	I0408 12:48:02.602088  433439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-527454 san=[127.0.0.1 192.168.50.7 default-k8s-diff-port-527454 localhost minikube]
	I0408 12:48:02.845116  433439 provision.go:177] copyRemoteCerts
	I0408 12:48:02.845190  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:48:02.845217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.848054  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.848406  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848559  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.848817  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.848986  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.849125  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:02.934223  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:48:02.962726  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0408 12:48:02.992767  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:48:03.021973  433439 provision.go:87] duration metric: took 427.87874ms to configureAuth
	I0408 12:48:03.022009  433439 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:48:03.022270  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:48:03.022382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.025407  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025765  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.025802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025959  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.026215  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026379  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026510  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.026659  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.026834  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.026856  433439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:48:03.310263  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:48:03.310307  433439 machine.go:97] duration metric: took 1.088813603s to provisionDockerMachine
	I0408 12:48:03.310323  433439 start.go:293] postStartSetup for "default-k8s-diff-port-527454" (driver="kvm2")
	I0408 12:48:03.310337  433439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:48:03.310362  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.310758  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:48:03.310799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.313533  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.313968  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.314001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.314201  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.314375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.314584  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.314760  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.400087  433439 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:48:03.405240  433439 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:48:03.405272  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:48:03.405351  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:48:03.405450  433439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:48:03.405570  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:48:03.415947  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:03.448935  433439 start.go:296] duration metric: took 138.593583ms for postStartSetup
	I0408 12:48:03.449025  433439 fix.go:56] duration metric: took 23.946779964s for fixHost
	I0408 12:48:03.449055  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.452026  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452392  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.452435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452630  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.452844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453063  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453248  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.453420  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.453604  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.453615  433439 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 12:48:03.565710  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580483.551031252
	
	I0408 12:48:03.565738  433439 fix.go:216] guest clock: 1712580483.551031252
	I0408 12:48:03.565750  433439 fix.go:229] Guest: 2024-04-08 12:48:03.551031252 +0000 UTC Remote: 2024-04-08 12:48:03.44903588 +0000 UTC m=+361.760256784 (delta=101.995372ms)
	I0408 12:48:03.565777  433439 fix.go:200] guest clock delta is within tolerance: 101.995372ms
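fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host clock and only resyncs when the difference exceeds a tolerance; here the ~102ms delta is accepted. A minimal sketch of that comparison; the 2s tolerance below is an assumption for illustration, the log only shows that ~102ms passed the check:

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock difference and
// whether it is small enough to skip a resync.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values from the log entry above: guest epoch 1712580483.551031252,
	// host reading roughly 102ms earlier.
	guest := time.Unix(1712580483, 551031252)
	host := guest.Add(-101995372 * time.Nanosecond)
	delta, ok := withinTolerance(guest, host, 2*time.Second) // tolerance is an assumed value
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}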
	I0408 12:48:03.565787  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 24.063582343s
	I0408 12:48:03.565806  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.566106  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:03.569409  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.569776  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.569814  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.570017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570577  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570831  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570952  433439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:48:03.571021  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.571121  433439 ssh_runner.go:195] Run: cat /version.json
	I0408 12:48:03.571146  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.573939  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574167  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574300  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574333  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574469  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574594  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574621  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574674  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.574757  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574871  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.574957  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.575130  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.575441  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.575590  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.695930  433439 ssh_runner.go:195] Run: systemctl --version
	I0408 12:48:03.702915  433439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:48:03.853737  433439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:48:03.860218  433439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:48:03.860287  433439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:48:03.877827  433439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:48:03.877861  433439 start.go:494] detecting cgroup driver to use...
	I0408 12:48:03.877943  433439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:48:03.897232  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:48:03.913028  433439 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:48:03.913112  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:48:03.929574  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:48:03.946880  433439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:48:04.083524  433439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:48:04.243842  433439 docker.go:233] disabling docker service ...
	I0408 12:48:04.243938  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:48:04.260459  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:48:04.276119  433439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:48:04.428999  433439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:48:04.571431  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:48:04.589661  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:48:04.612872  433439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:48:04.612954  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.625841  433439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:48:04.625939  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.638868  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.652106  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.664883  433439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:48:04.678149  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.691069  433439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.711329  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.725917  433439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:48:04.738875  433439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:48:04.738941  433439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:48:04.756784  433439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
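The sequence above handles a freshly booted guest: the bridge netfilter sysctl is missing until br_netfilter is loaded, and IPv4 forwarding is then switched on. A self-contained Go sketch of the same idea follows; it is illustrative only (the real steps are the shell commands logged above) and must run as root:

// Illustrative sketch only: load br_netfilter when the bridge sysctl is absent,
// then enable IPv4 forwarding via procfs, mirroring the logged shell commands.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Module not loaded yet; mirror the `sudo modprobe br_netfilter` step.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
			return
		}
	}
	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}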
	I0408 12:48:04.769852  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:04.895658  433439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:48:05.056165  433439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:48:05.056270  433439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:48:05.061838  433439 start.go:562] Will wait 60s for crictl version
	I0408 12:48:05.061918  433439 ssh_runner.go:195] Run: which crictl
	I0408 12:48:05.066280  433439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:48:05.110966  433439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:48:05.111084  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.142272  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.176138  433439 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:48:00.606508  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:03.107018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:05.109926  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:02.220836  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:04.718465  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.177382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:05.180028  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180334  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:05.180363  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180635  433439 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:48:05.185436  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:05.199001  433439 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:48:05.199130  433439 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:48:05.199174  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:05.239255  433439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:48:05.239358  433439 ssh_runner.go:195] Run: which lz4
	I0408 12:48:05.244115  433439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:48:05.249135  433439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:48:05.249169  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:48:07.606284  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.607161  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.720025  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.219059  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.889921  433439 crio.go:462] duration metric: took 1.645848876s to copy over tarball
	I0408 12:48:06.890006  433439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:48:09.403589  433439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513555281s)
	I0408 12:48:09.403620  433439 crio.go:469] duration metric: took 2.513669951s to extract the tarball
	I0408 12:48:09.403627  433439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:48:09.446487  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:09.494576  433439 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:48:09.494606  433439 cache_images.go:84] Images are preloaded, skipping loading
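The preload decision above is driven by the two "sudo crictl images --output json" calls: before the tarball is extracted the key control-plane image is missing, afterwards all images are present and loading is skipped. A hedged Go sketch of that check (not minikube's implementation; the JSON field names follow crictl's image listing and should be treated as an assumption if your crictl version differs):

// Illustrative sketch only: list images via crictl and look for one tag to decide
// whether the preload tarball is still needed.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(out []byte, want string) (bool, error) {
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	found, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.29.3")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	if found {
		fmt.Println("images look preloaded, skipping tarball")
	} else {
		fmt.Println("preload tarball needed")
	}
}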
	I0408 12:48:09.494614  433439 kubeadm.go:928] updating node { 192.168.50.7 8444 v1.29.3 crio true true} ...
	I0408 12:48:09.494822  433439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-527454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:48:09.494917  433439 ssh_runner.go:195] Run: crio config
	I0408 12:48:09.541809  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:09.541839  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:09.541859  433439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:48:09.541887  433439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-527454 NodeName:default-k8s-diff-port-527454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:48:09.542105  433439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-527454"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:48:09.542201  433439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:48:09.553494  433439 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:48:09.553591  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:48:09.564970  433439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0408 12:48:09.584888  433439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:48:09.604538  433439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0408 12:48:09.623993  433439 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0408 12:48:09.628368  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:09.642170  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:09.789791  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:48:09.808943  433439 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454 for IP: 192.168.50.7
	I0408 12:48:09.808972  433439 certs.go:194] generating shared ca certs ...
	I0408 12:48:09.808995  433439 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:48:09.809194  433439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:48:09.809242  433439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:48:09.809253  433439 certs.go:256] generating profile certs ...
	I0408 12:48:09.809344  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/client.key
	I0408 12:48:09.809415  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key.ad1d04eb
	I0408 12:48:09.809457  433439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key
	I0408 12:48:09.809645  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:48:09.809699  433439 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:48:09.809713  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:48:09.809742  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:48:09.809764  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:48:09.809792  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:48:09.809851  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:09.810516  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:48:09.866085  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:48:09.899718  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:48:09.941704  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:48:09.976180  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 12:48:10.014420  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:48:10.044380  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:48:10.072034  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:48:10.099417  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:48:10.126143  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:48:10.154244  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:48:10.183954  433439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:48:10.207277  433439 ssh_runner.go:195] Run: openssl version
	I0408 12:48:10.213691  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:48:10.228406  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233736  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233798  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.240236  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:48:10.253382  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:48:10.267783  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273234  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273318  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.279925  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:48:10.292710  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:48:10.305381  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310629  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310703  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.317063  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:48:10.330320  433439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:48:10.336138  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:48:10.343341  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:48:10.350536  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:48:10.357665  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:48:10.364925  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:48:10.372314  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
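Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether that certificate expires within the next 24 hours (86400 seconds); a zero exit status here lets the existing certs be reused. The same check can be expressed in Go as a small sketch (illustrative only; the path in main is copied from the log and would need adjusting elsewhere):

// Illustrative sketch only: report whether a PEM-encoded certificate expires
// within the given duration, equivalent in spirit to `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}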
	I0408 12:48:10.380001  433439 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:48:10.380107  433439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:48:10.380174  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.425378  433439 cri.go:89] found id: ""
	I0408 12:48:10.425475  433439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:48:10.438972  433439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:48:10.439000  433439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:48:10.439005  433439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:48:10.439051  433439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:48:10.452072  433439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:48:10.453410  433439 kubeconfig.go:125] found "default-k8s-diff-port-527454" server: "https://192.168.50.7:8444"
	I0408 12:48:10.456022  433439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:48:10.469116  433439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0408 12:48:10.469171  433439 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:48:10.469188  433439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:48:10.469256  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.517874  433439 cri.go:89] found id: ""
	I0408 12:48:10.517969  433439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:48:10.538088  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:48:10.551560  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:48:10.551580  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:48:10.551636  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:48:10.564123  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:48:10.564209  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:48:10.578691  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:48:10.590692  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:48:10.590765  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:48:10.602902  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.616831  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:48:10.616922  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.629213  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:48:10.641625  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:48:10.641709  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:48:10.653162  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:48:10.665261  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:10.811712  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.107002  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.606976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:12.188805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.221750  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.901885  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09013292s)
	I0408 12:48:11.975836  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.237051  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.329550  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.460345  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:48:12.460457  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.961443  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.460681  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.520828  433439 api_server.go:72] duration metric: took 1.060470201s to wait for apiserver process to appear ...
	I0408 12:48:13.520866  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:48:13.520899  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:13.521407  433439 api_server.go:269] stopped: https://192.168.50.7:8444/healthz: Get "https://192.168.50.7:8444/healthz": dial tcp 192.168.50.7:8444: connect: connection refused
	I0408 12:48:14.022007  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.564485  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.564526  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:16.564543  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.617870  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.617904  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:17.021290  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.026545  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.026578  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:17.521124  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.529552  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.529596  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:18.021125  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:18.037000  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:48:18.049656  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:48:18.049699  433439 api_server.go:131] duration metric: took 4.528823991s to wait for apiserver health ...
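The healthz wait above follows a typical progression: connection refused while the apiserver container starts, 403 for the anonymous probe, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap priority classes) finish, then 200 "ok" after ~4.5s. A minimal Go sketch of such a polling loop (not minikube's implementation; skipping TLS verification, the 500ms interval and the 60s budget are assumptions):

// Illustrative sketch only: poll an apiserver /healthz endpoint until it returns
// 200, tolerating 403/500 responses while the control plane finishes booting.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.7:8444/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}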
	I0408 12:48:18.049722  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:18.049730  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:18.051495  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:48:16.607222  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:18.607837  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.717612  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:19.217050  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.052916  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:48:18.072115  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:48:18.111408  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:48:18.130585  433439 system_pods.go:59] 8 kube-system pods found
	I0408 12:48:18.130629  433439 system_pods.go:61] "coredns-76f75df574-r99kj" [171e271b-eec6-4238-afb1-82a2f228c225] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:48:18.130641  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [7019f1eb-58ef-4b1f-acf3-ed3c1ed84623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:48:18.130651  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [80ccd16d-d883-4c92-bb13-abe2d412532c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:48:18.130661  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [78d513aa-1f24-42c0-bfb9-4c20fdee63f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:48:18.130669  433439 system_pods.go:61] "kube-proxy-ztmmc" [de09a26e-cd95-401a-b575-977fcd660c47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:48:18.130683  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [eac4d549-1763-45b8-be11-b3b9e83f5110] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:48:18.130702  433439 system_pods.go:61] "metrics-server-57f55c9bc5-44qbm" [52631fc6-84d0-443b-ba42-de35a65db0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:48:18.130714  433439 system_pods.go:61] "storage-provisioner" [82e8b0d0-6c22-4644-8bd1-b48887b0fe82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:48:18.130730  433439 system_pods.go:74] duration metric: took 19.293309ms to wait for pod list to return data ...
	I0408 12:48:18.130745  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:48:18.135625  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:48:18.135663  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:48:18.135679  433439 node_conditions.go:105] duration metric: took 4.924641ms to run NodePressure ...
	I0408 12:48:18.135724  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:18.416272  433439 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424302  433439 kubeadm.go:733] kubelet initialised
	I0408 12:48:18.424325  433439 kubeadm.go:734] duration metric: took 8.015642ms waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424342  433439 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:48:18.436706  433439 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.447063  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447102  433439 pod_ready.go:81] duration metric: took 10.361708ms for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.447116  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447126  433439 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.460464  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460496  433439 pod_ready.go:81] duration metric: took 13.357612ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.460513  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460523  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.469991  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470035  433439 pod_ready.go:81] duration metric: took 9.502493ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.470072  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470083  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.516886  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516920  433439 pod_ready.go:81] duration metric: took 46.823396ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.516933  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516940  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915101  433439 pod_ready.go:92] pod "kube-proxy-ztmmc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:18.915131  433439 pod_ready.go:81] duration metric: took 398.182437ms for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915144  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:20.922456  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.107083  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.108249  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.219995  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.718091  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.922607  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:24.922155  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:24.922185  433439 pod_ready.go:81] duration metric: took 6.007031338s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:24.922200  433439 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
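	[editor's note] The pod_ready.go entries above show the post-restart wait loop: for each system-critical pod, minikube polls the pod's Ready condition (skipping pods whose node is itself not Ready) until the condition is True or the 4m0s budget runs out. Below is a minimal sketch of that kind of Ready-condition poll using client-go; the helper name waitPodReady and the fixed 2-second interval are assumptions for illustration, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	// Hypothetical helper for illustration; minikube's real logic lives in pod_ready.go.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "kube-proxy-ztmmc", 4*time.Minute); err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}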
	I0408 12:48:25.607653  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.216429  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.218553  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.717516  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.931412  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:29.430930  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.608369  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:33.107424  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:32.717551  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.216256  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.931958  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:34.430950  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.607018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.607820  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:40.106361  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.217721  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:39.218016  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.929987  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:38.931900  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.429986  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:42.605609  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:44.606744  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.717196  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:43.718405  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
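	[editor's note] Process 433881 (the old-k8s-version cluster) never finds a running kube-apiserver: it re-runs sudo pgrep -xnf kube-apiserver.*minikube.* at roughly 500ms intervals before falling back to the CRI listing below. A minimal sketch of that fixed-interval process probe follows; it runs pgrep locally via os/exec rather than through minikube's ssh_runner, and the helper names are assumptions for illustration only.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process started by minikube
	// is visible to pgrep; pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	// waitForAPIServer polls at a fixed ~500ms cadence, mirroring the log above,
	// until the process appears or the timeout elapses.
	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return true
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		if !waitForAPIServer(30 * time.Second) {
			fmt.Println("kube-apiserver never came up; falling back to crictl and log collection")
		}
	}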
	I0408 12:48:43.430411  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:45.932993  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.607058  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.607716  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.216568  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.218325  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.718153  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
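	[editor's note] Each diagnostic pass like the one above walks a fixed list of control-plane components and asks the CRI for matching containers (sudo crictl ps -a --quiet --name=...); with zero containers found for every component, it gathers kubelet, dmesg, describe-nodes and CRI-O output instead, and describe nodes fails because the apiserver on localhost:8443 is down. A small sketch of that per-component listing follows; it shells out to crictl locally rather than over ssh_runner, purely as an illustration of the pattern.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose name
	// matches the given component, using crictl the same way the cri.go lines do.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		// The same component list the log cycles through before collecting logs.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}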
	I0408 12:48:48.430488  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.432250  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:51.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.606447  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.217434  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:55.717265  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:52.928902  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.931253  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:56.106505  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.108187  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.218754  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.718600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:57.431032  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:59.930470  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.608450  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.106647  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.218064  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:01.933210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:04.428894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.428974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.606895  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:08.106819  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:07.718023  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:08.429894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.930925  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.609607  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:13.106296  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.716182  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.717779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:12.931911  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.933011  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:15.607174  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:18.106346  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:17.216822  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.216874  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:17.429653  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.432135  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:20.609415  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.105576  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:25.106666  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:21.217932  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.720613  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:21.929027  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.933879  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.429452  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:27.106798  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:29.605690  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.216525  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.216788  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.217600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.433974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.930607  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.606729  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.107220  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:32.218719  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.718165  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:33.429854  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:35.933211  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:36.605433  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:38.606014  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.217818  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:39.717250  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.431584  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:40.930328  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.106567  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:43.606770  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.718244  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.218855  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:43.428915  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:45.430859  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.106078  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.108320  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.718065  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.721772  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.432167  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:49.929922  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:50.606575  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.106564  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:51.217123  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.217785  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.217941  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:52.429230  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:54.929643  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.607214  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:58.109657  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:57.717805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.219041  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.430097  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:59.430226  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.605915  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:03.106990  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.717304  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.717757  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:01.930407  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.429884  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:06.430557  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:05.605804  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.606737  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:10.107370  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.218275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:09.716548  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.930029  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.429317  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.107556  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:14.606578  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.717001  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:13.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.719875  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:13.429712  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.430255  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:17.106153  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:19.607656  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.217812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.719543  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:17.930126  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.430210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:22.107118  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.107812  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:23.217266  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:25.218476  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:22.928804  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.930608  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:26.607216  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:28.608092  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.717812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:30.218079  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.429985  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:29.928718  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:31.106901  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.606293  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:32.717659  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.217468  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:31.928982  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.929758  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.930610  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:36.110235  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:38.606389  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.217645  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:39.218104  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:38.429765  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.430575  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.607976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.106175  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:45.107548  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:41.716953  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.717149  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:42.928908  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:44.929425  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:47.606676  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.608190  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.217528  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:48.717623  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:50.729538  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
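
Each "failed describe nodes" block above carries the actual diagnostic: the bundled kubectl cannot reach localhost:8443, the connection is refused, and the command exits with status 1, leaving only kubelet, dmesg, CRI-O and container-status output to collect. A quick way to confirm the same condition is a plain TCP dial against the apiserver port; the sketch below is illustrative only and is not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt the same connection kubectl makes; a "connection refused" error
	// here matches the stderr captured in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
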
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.929626  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.429810  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.106861  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.608143  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:53.217707  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:55.718120  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:51.929891  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.430216  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:57.106572  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.108219  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.216315  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:00.218162  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
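
The timestamps show the whole probe-and-gather cycle repeating on roughly a three-second cadence (12:50:43, :46, :49, :52, ...) for as long as the apiserver stays down. A minimal retry loop with the same shape, a fixed interval bounded by an overall deadline, might look like the sketch below; it is illustrative only and does not reproduce minikube's actual retry helpers:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil re-runs check every interval until it succeeds or the deadline
// passes, mirroring the ~3s cadence visible in the log timestamps above.
func pollUntil(interval, timeout time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for {
		if check() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(3*time.Second, 30*time.Second, func() bool {
		fmt.Println("probing kube-apiserver ...")
		return false // stand-in for the real probe
	})
	fmt.Println(err)
}
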
	I0408 12:50:56.931254  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.429578  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.606793  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.105581  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:02.218796  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.718232  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:01.929336  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:03.929754  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.429604  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.106159  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.608247  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:07.216779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:09.217275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.929238  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:10.929862  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.107603  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.611978  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.718498  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:12.931867  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:15.430299  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.106552  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.106622  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.108053  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.217595  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.217758  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.220115  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:17.930896  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.430609  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.108741  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.605215  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.716480  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.720217  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:22.929962  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:25.430340  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.605294  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:28.606623  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:27.218727  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.716524  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.430647  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.929105  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:30.607227  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.106814  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.106890  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:31.716747  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.718962  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:31.930145  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.931893  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.932546  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:37.606783  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:39.607738  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.217708  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:38.717067  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.721801  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:38.429490  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.932035  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:42.106879  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:44.607134  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:41.210350  433557 pod_ready.go:81] duration metric: took 4m0.000311819s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:41.210399  433557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 12:51:41.210413  433557 pod_ready.go:38] duration metric: took 4m3.201150727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:41.210464  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:51:41.210520  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:41.210591  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:41.269963  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:41.269999  433557 cri.go:89] found id: ""
	I0408 12:51:41.270010  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:41.270072  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.275411  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:41.275495  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:41.319478  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:41.319517  433557 cri.go:89] found id: ""
	I0408 12:51:41.319529  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:41.319590  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.329956  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:41.330045  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:41.380017  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:41.380049  433557 cri.go:89] found id: ""
	I0408 12:51:41.380061  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:41.380131  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.384973  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:41.385077  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:41.429757  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:41.429786  433557 cri.go:89] found id: ""
	I0408 12:51:41.429798  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:41.429863  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.435404  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:41.435488  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:41.484998  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:41.485031  433557 cri.go:89] found id: ""
	I0408 12:51:41.485042  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:41.485111  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.489802  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:41.489878  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:41.543982  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.544016  433557 cri.go:89] found id: ""
	I0408 12:51:41.544028  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:41.544096  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.548766  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:41.548836  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:41.588398  433557 cri.go:89] found id: ""
	I0408 12:51:41.588425  433557 logs.go:276] 0 containers: []
	W0408 12:51:41.588433  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:41.588439  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:41.588498  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:41.635748  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:41.635771  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:41.635775  433557 cri.go:89] found id: ""
	I0408 12:51:41.635782  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:41.635849  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.641800  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.646173  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:41.646206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.717189  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:41.717228  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:41.779618  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:41.779653  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:41.840050  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:41.840092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:41.855982  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:41.856016  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:42.016416  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:42.016455  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:42.085493  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:42.085538  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:42.132590  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:42.132626  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:42.642069  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:42.642125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:42.708516  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:42.708566  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:42.759072  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:42.759125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:42.810189  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:42.810242  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:42.855931  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:42.855971  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.396658  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.414640  433557 api_server.go:72] duration metric: took 4m14.728700184s to wait for apiserver process to appear ...
	I0408 12:51:45.414671  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:51:45.414714  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.414772  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.460983  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:45.461012  433557 cri.go:89] found id: ""
	I0408 12:51:45.461023  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:45.461102  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.466928  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.467037  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.516723  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:45.516746  433557 cri.go:89] found id: ""
	I0408 12:51:45.516755  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:45.516813  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.521315  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.521413  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.560838  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.560865  433557 cri.go:89] found id: ""
	I0408 12:51:45.560876  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:45.560926  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.565852  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.565937  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.610154  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:45.610175  433557 cri.go:89] found id: ""
	I0408 12:51:45.610183  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:45.610229  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.615014  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.615098  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.658261  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:45.658292  433557 cri.go:89] found id: ""
	I0408 12:51:45.658304  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:45.658367  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.663148  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.663242  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:45.708805  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.708838  433557 cri.go:89] found id: ""
	I0408 12:51:45.708850  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:45.708906  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.713733  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:45.713800  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:45.763432  433557 cri.go:89] found id: ""
	I0408 12:51:45.763465  433557 logs.go:276] 0 containers: []
	W0408 12:51:45.763477  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:45.763486  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:45.763555  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:45.808689  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:45.808711  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.808715  433557 cri.go:89] found id: ""
	I0408 12:51:45.808723  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:45.808782  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.813386  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.818556  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:45.818589  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:43.429577  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:45.431057  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:47.106109  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:48.599265  433674 pod_ready.go:81] duration metric: took 4m0.000260398s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:48.599302  433674 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:51:48.599335  433674 pod_ready.go:38] duration metric: took 4m13.995684279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:48.599373  433674 kubeadm.go:591] duration metric: took 4m22.072516751s to restartPrimaryControlPlane
	W0408 12:51:48.599529  433674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:48.599619  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:45.864458  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:45.864503  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.907964  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:45.908000  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.980082  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:45.980123  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:46.041294  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:46.041330  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:46.102117  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:46.102171  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:46.188553  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:46.188583  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:46.234191  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:46.234229  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:46.281240  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.281273  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.721047  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.721092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.781387  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.781429  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:46.797003  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.797043  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:46.917073  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.917109  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:49.481948  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:51:49.488261  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:51:49.489694  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:51:49.489726  433557 api_server.go:131] duration metric: took 4.075047023s to wait for apiserver health ...
	I0408 12:51:49.489737  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:51:49.489772  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:49.489845  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:49.535955  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.535980  433557 cri.go:89] found id: ""
	I0408 12:51:49.535990  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:49.536052  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.543143  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:49.543239  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.590041  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:49.590075  433557 cri.go:89] found id: ""
	I0408 12:51:49.590087  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:49.590155  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.595726  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.595803  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.645009  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:49.645046  433557 cri.go:89] found id: ""
	I0408 12:51:49.645057  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:49.645110  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.650243  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.650329  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.693859  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:49.693882  433557 cri.go:89] found id: ""
	I0408 12:51:49.693895  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:49.693972  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.699620  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.699709  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.755614  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:49.755646  433557 cri.go:89] found id: ""
	I0408 12:51:49.755657  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:49.755739  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.761838  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.761913  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.808919  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:49.808950  433557 cri.go:89] found id: ""
	I0408 12:51:49.808961  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:49.809040  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.813965  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.814046  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.859700  433557 cri.go:89] found id: ""
	I0408 12:51:49.859737  433557 logs.go:276] 0 containers: []
	W0408 12:51:49.859748  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.859757  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:49.859832  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:49.908020  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:49.908044  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:49.908050  433557 cri.go:89] found id: ""
	I0408 12:51:49.908060  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:49.908129  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.913034  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.919193  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:49.919233  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.984657  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.984704  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:50.003487  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:50.003526  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:50.139417  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:50.139481  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:50.240166  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:50.240206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:50.288776  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:50.288823  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:50.339222  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:50.339252  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:50.402263  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:50.402308  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:50.461894  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:50.461946  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:50.507329  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:50.507373  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:50.576851  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:50.576894  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:47.932221  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.430702  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.947824  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:50.947878  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:51.007034  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:51.007084  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:53.563768  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:51:53.563811  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.563818  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.563824  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.563829  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.563835  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.563840  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.563850  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.563857  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.563870  433557 system_pods.go:74] duration metric: took 4.074125222s to wait for pod list to return data ...
	I0408 12:51:53.563884  433557 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:51:53.566991  433557 default_sa.go:45] found service account: "default"
	I0408 12:51:53.567015  433557 default_sa.go:55] duration metric: took 3.122873ms for default service account to be created ...
	I0408 12:51:53.567024  433557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:51:53.574517  433557 system_pods.go:86] 8 kube-system pods found
	I0408 12:51:53.574558  433557 system_pods.go:89] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.574565  433557 system_pods.go:89] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.574570  433557 system_pods.go:89] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.574575  433557 system_pods.go:89] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.574581  433557 system_pods.go:89] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.574587  433557 system_pods.go:89] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.574598  433557 system_pods.go:89] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.574605  433557 system_pods.go:89] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.574616  433557 system_pods.go:126] duration metric: took 7.585497ms to wait for k8s-apps to be running ...
	I0408 12:51:53.574629  433557 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:51:53.574720  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:53.597605  433557 system_svc.go:56] duration metric: took 22.957663ms WaitForService to wait for kubelet
	I0408 12:51:53.597658  433557 kubeadm.go:576] duration metric: took 4m22.91172229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:51:53.597683  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:51:53.601940  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:51:53.601992  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:51:53.602009  433557 node_conditions.go:105] duration metric: took 4.320913ms to run NodePressure ...
	I0408 12:51:53.602028  433557 start.go:240] waiting for startup goroutines ...
	I0408 12:51:53.602040  433557 start.go:245] waiting for cluster config update ...
	I0408 12:51:53.602060  433557 start.go:254] writing updated cluster config ...
	I0408 12:51:53.602426  433557 ssh_runner.go:195] Run: rm -f paused
	I0408 12:51:53.660257  433557 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:51:53.662533  433557 out.go:177] * Done! kubectl is now configured to use "no-preload-135234" cluster and "default" namespace by default
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:51:52.434513  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:54.930343  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:57.432046  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:59.436031  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:01.930142  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:03.931249  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:06.429806  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:08.929311  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:10.929707  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:13.430287  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:15.430449  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:17.933664  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:20.428983  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:21.300307  433674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.700649463s)
	I0408 12:52:21.300429  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:21.321628  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:21.334359  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:21.345697  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:21.345755  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:21.345804  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:52:21.356798  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:21.356868  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:21.368622  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:52:21.379589  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:21.379676  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:21.391211  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.401783  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:21.401874  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.413655  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:52:21.424585  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:21.424673  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:21.436887  433674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:21.495891  433674 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:21.496022  433674 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:21.667820  433674 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:21.667973  433674 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:21.668100  433674 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:21.904532  433674 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:21.906631  433674 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:21.906736  433674 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:21.906833  433674 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:21.906962  433674 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:21.907084  433674 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:21.907206  433674 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:21.907283  433674 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:21.907372  433674 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:21.907705  433674 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:21.908164  433674 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:21.908536  433674 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:21.908852  433674 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:21.908942  433674 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:22.096319  433674 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:22.286425  433674 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:22.442534  433674 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:22.542901  433674 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:22.959098  433674 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:22.959656  433674 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:22.962359  433674 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:22.965011  433674 out.go:204]   - Booting up control plane ...
	I0408 12:52:22.965148  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:22.965830  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:22.966718  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:22.987425  433674 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:22.988618  433674 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:22.988690  433674 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:23.134634  433674 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:52:22.429735  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.431237  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.923026  433439 pod_ready.go:81] duration metric: took 4m0.000804438s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	E0408 12:52:24.923079  433439 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:52:24.923103  433439 pod_ready.go:38] duration metric: took 4m6.498748448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:24.923143  433439 kubeadm.go:591] duration metric: took 4m14.484131334s to restartPrimaryControlPlane
	W0408 12:52:24.923222  433439 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:52:24.923260  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:52:29.641484  433674 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505486 seconds
	I0408 12:52:29.659612  433674 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:52:29.683882  433674 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:52:30.237806  433674 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:52:30.238135  433674 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-488947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:52:30.755095  433674 kubeadm.go:309] [bootstrap-token] Using token: kwhj7g.e2hm9yupaxknooep
	I0408 12:52:30.756904  433674 out.go:204]   - Configuring RBAC rules ...
	I0408 12:52:30.757044  433674 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:52:30.763322  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:52:30.776489  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:52:30.780180  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:52:30.784949  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:52:30.789409  433674 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:52:30.810228  433674 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:52:31.071672  433674 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:52:31.180390  433674 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:52:31.180421  433674 kubeadm.go:309] 
	I0408 12:52:31.180493  433674 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:52:31.180504  433674 kubeadm.go:309] 
	I0408 12:52:31.180626  433674 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:52:31.180652  433674 kubeadm.go:309] 
	I0408 12:52:31.180682  433674 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:52:31.180758  433674 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:52:31.180823  433674 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:52:31.180835  433674 kubeadm.go:309] 
	I0408 12:52:31.180898  433674 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:52:31.180908  433674 kubeadm.go:309] 
	I0408 12:52:31.180967  433674 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:52:31.180978  433674 kubeadm.go:309] 
	I0408 12:52:31.181069  433674 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:52:31.181200  433674 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:52:31.181301  433674 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:52:31.181312  433674 kubeadm.go:309] 
	I0408 12:52:31.181446  433674 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:52:31.181564  433674 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:52:31.181577  433674 kubeadm.go:309] 
	I0408 12:52:31.181706  433674 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.181869  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:52:31.181923  433674 kubeadm.go:309] 	--control-plane 
	I0408 12:52:31.181933  433674 kubeadm.go:309] 
	I0408 12:52:31.182039  433674 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:52:31.182055  433674 kubeadm.go:309] 
	I0408 12:52:31.182167  433674 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.182323  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:52:31.182467  433674 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:52:31.182492  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:52:31.182502  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:52:31.184299  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:52:31.185716  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:52:31.217708  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
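	The two steps above are minikube's bridge CNI setup: it creates /etc/cni/net.d and copies a generated bridge conflist onto the node. The actual 496-byte payload is not shown in this log; if needed, it can be inspected from the host roughly as follows (sketch; profile name taken from this run, command form is an assumption):

	    minikube -p embed-certs-488947 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist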
	I0408 12:52:31.277627  433674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:52:31.277716  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:31.277740  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-488947 minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=embed-certs-488947 minikube.k8s.io/primary=true
	I0408 12:52:31.591490  433674 ops.go:34] apiserver oom_adj: -16
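	The clusterrolebinding created above (minikube-rbac) grants cluster-admin to the kube-system:default ServiceAccount so minikube's own components can manage the cluster, and the label command marks embed-certs-488947 as the primary node. A quick way to confirm both from the host once the cluster is up (sketch; assumes the kubeconfig written by this run is active):

	    kubectl get clusterrolebinding minikube-rbac -o wide
	    kubectl get node embed-certs-488947 --show-labels | grep minikube.k8s.io/primary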
	I0408 12:52:31.591651  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.092642  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.591845  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.092645  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.592585  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.092066  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.592232  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.091882  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.591794  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.091849  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.592616  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.091816  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.091756  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.592114  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.092524  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.591838  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.091853  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.591747  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.092421  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.592611  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.092369  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.092638  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.592549  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.091831  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.592358  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.799776  433674 kubeadm.go:1107] duration metric: took 13.522136387s to wait for elevateKubeSystemPrivileges
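	The repeated `kubectl get sa default` calls above are minikube polling roughly every 500ms until the default ServiceAccount exists in the freshly initialized cluster; only then does it report kube-system privileges as elevated. A minimal stand-alone equivalent of that wait (the 60s ceiling is an assumption, not from this run):

	    # Poll for the default ServiceAccount before proceeding.
	    timeout 60 bash -c 'until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done'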
	W0408 12:52:44.799833  433674 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:52:44.799845  433674 kubeadm.go:393] duration metric: took 5m18.325910079s to StartCluster
	I0408 12:52:44.799870  433674 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.799981  433674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:52:44.802396  433674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.802704  433674 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:52:44.804525  433674 out.go:177] * Verifying Kubernetes components...
	I0408 12:52:44.802776  433674 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:52:44.802921  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:52:44.805724  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:52:44.805735  433674 addons.go:69] Setting metrics-server=true in profile "embed-certs-488947"
	I0408 12:52:44.805751  433674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488947"
	I0408 12:52:44.805777  433674 addons.go:234] Setting addon metrics-server=true in "embed-certs-488947"
	W0408 12:52:44.805792  433674 addons.go:243] addon metrics-server should already be in state true
	I0408 12:52:44.805824  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805727  433674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488947"
	I0408 12:52:44.805869  433674 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-488947"
	W0408 12:52:44.805883  433674 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:52:44.805915  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805834  433674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488947"
	I0408 12:52:44.806260  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806262  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806266  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806286  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806288  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806326  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.824170  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0408 12:52:44.824862  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.825517  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.825547  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.826049  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.826714  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.826752  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.827345  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0408 12:52:44.827569  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0408 12:52:44.828195  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828218  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828860  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.828892  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829023  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.829040  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829499  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829541  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829687  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.830201  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.830247  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.834128  433674 addons.go:234] Setting addon default-storageclass=true in "embed-certs-488947"
	W0408 12:52:44.834156  433674 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:52:44.834189  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.834569  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.834611  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.845829  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 12:52:44.846556  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.847545  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.847571  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.848210  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.848478  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.850407  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.850783  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0408 12:52:44.853144  433674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:52:44.851322  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.854214  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0408 12:52:44.855198  433674 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:44.855222  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:52:44.855245  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.855434  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.855766  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855797  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.855936  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855956  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.856190  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856264  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856382  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.856937  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.856973  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.857994  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.859623  433674 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:52:44.860991  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:52:44.861012  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:52:44.858778  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.861032  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.861051  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.861072  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.859293  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.861282  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.861617  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.861817  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.863813  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864274  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.864299  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864483  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.864681  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.864846  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.865028  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.874355  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0408 12:52:44.874834  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.875388  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.875418  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.875775  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.875967  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.877519  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.877786  433674 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:44.877803  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:52:44.877818  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.880463  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.880846  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.880874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.881040  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.881234  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.881615  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.881753  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:45.057304  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:52:45.082702  433674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.091955  433674 node_ready.go:49] node "embed-certs-488947" has status "Ready":"True"
	I0408 12:52:45.091994  433674 node_ready.go:38] duration metric: took 9.246027ms for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.092007  433674 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:45.101654  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:45.237037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:52:45.237068  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:52:45.238421  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:45.274088  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:45.295037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:52:45.295078  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:52:45.397474  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:45.397504  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:52:45.431610  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
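	After copying the four metrics-server manifests onto the node, minikube applies them in a single kubectl invocation against the node-local kubeconfig, as shown above. The resulting objects can be checked afterwards from the host (sketch; object names assumed to match the standard metrics-server manifests):

	    kubectl -n kube-system get deployment metrics-server
	    kubectl get apiservice v1beta1.metrics.k8s.io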
	I0408 12:52:46.375681  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101541881s)
	I0408 12:52:46.375827  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.375862  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376204  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.376244  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.137166571s)
	I0408 12:52:46.376271  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.376291  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.376309  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376313  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376319  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.377184  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377205  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377613  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.377680  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377699  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377709  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.377747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.378168  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.378182  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.413325  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.413361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.413757  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.413780  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.679538  433674 pod_ready.go:92] pod "coredns-76f75df574-4gdp4" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.679577  433674 pod_ready.go:81] duration metric: took 1.577895468s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.679596  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760007  433674 pod_ready.go:92] pod "coredns-76f75df574-r5rxq" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.760043  433674 pod_ready.go:81] duration metric: took 80.437752ms for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760059  433674 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.803070  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371401052s)
	I0408 12:52:46.803136  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803150  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803496  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803519  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803530  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803539  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803846  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803862  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803882  433674 addons.go:470] Verifying addon metrics-server=true in "embed-certs-488947"
	I0408 12:52:46.806034  433674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0408 12:52:46.804164  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.807597  433674 pod_ready.go:81] duration metric: took 47.521367ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807622  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807621  433674 addons.go:505] duration metric: took 2.004847213s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
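	With storage-provisioner, default-storageclass and metrics-server enabled for this profile, the addon state can be reviewed from the host with (sketch):

	    minikube addons list -p embed-certs-488947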
	I0408 12:52:46.827049  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.827075  433674 pod_ready.go:81] duration metric: took 19.440746ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.827086  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848718  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.848759  433674 pod_ready.go:81] duration metric: took 21.664037ms for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848775  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087350  433674 pod_ready.go:92] pod "kube-proxy-mqrtp" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.087387  433674 pod_ready.go:81] duration metric: took 238.602902ms for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087403  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486822  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.486863  433674 pod_ready.go:81] duration metric: took 399.44977ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486875  433674 pod_ready.go:38] duration metric: took 2.394853452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:47.486895  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:52:47.486967  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:52:47.517426  433674 api_server.go:72] duration metric: took 2.714672176s to wait for apiserver process to appear ...
	I0408 12:52:47.517461  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:52:47.517492  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:52:47.527074  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:52:47.528230  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:52:47.528285  433674 api_server.go:131] duration metric: took 10.815426ms to wait for apiserver health ...
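	The healthz probe above is an HTTPS GET against https://192.168.72.159:8443/healthz that must return 200/ok before minikube continues. The same check can be reproduced through kubectl once the kubeconfig is in place (sketch; assumes the embed-certs-488947 context written by this run):

	    kubectl --context embed-certs-488947 get --raw /healthz   # expected output: ok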
	I0408 12:52:47.528296  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:52:47.692054  433674 system_pods.go:59] 9 kube-system pods found
	I0408 12:52:47.692091  433674 system_pods.go:61] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:47.692096  433674 system_pods.go:61] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:47.692101  433674 system_pods.go:61] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:47.692105  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:47.692109  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:47.692112  433674 system_pods.go:61] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:47.692116  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:47.692123  433674 system_pods.go:61] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:47.692129  433674 system_pods.go:61] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:47.692137  433674 system_pods.go:74] duration metric: took 163.833038ms to wait for pod list to return data ...
	I0408 12:52:47.692146  433674 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:52:47.886668  433674 default_sa.go:45] found service account: "default"
	I0408 12:52:47.886695  433674 default_sa.go:55] duration metric: took 194.543392ms for default service account to be created ...
	I0408 12:52:47.886707  433674 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:52:48.090174  433674 system_pods.go:86] 9 kube-system pods found
	I0408 12:52:48.090212  433674 system_pods.go:89] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:48.090217  433674 system_pods.go:89] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:48.090222  433674 system_pods.go:89] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:48.090226  433674 system_pods.go:89] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:48.090232  433674 system_pods.go:89] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:48.090236  433674 system_pods.go:89] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:48.090240  433674 system_pods.go:89] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:48.090248  433674 system_pods.go:89] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:48.090253  433674 system_pods.go:89] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:48.090260  433674 system_pods.go:126] duration metric: took 203.547421ms to wait for k8s-apps to be running ...
	I0408 12:52:48.090266  433674 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:52:48.090312  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:48.106285  433674 system_svc.go:56] duration metric: took 15.998172ms WaitForService to wait for kubelet
	I0408 12:52:48.106322  433674 kubeadm.go:576] duration metric: took 3.303579521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:52:48.106345  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:52:48.287351  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:52:48.287381  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:52:48.287392  433674 node_conditions.go:105] duration metric: took 181.042972ms to run NodePressure ...
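	The capacity figures used for the NodePressure check above (17734596Ki ephemeral storage, 2 CPUs) come straight from the node's status and can be read back with (sketch):

	    kubectl --context embed-certs-488947 get node embed-certs-488947 \
	      -o jsonpath='{.status.capacity.ephemeral-storage} {.status.capacity.cpu}{"\n"}'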
	I0408 12:52:48.287403  433674 start.go:240] waiting for startup goroutines ...
	I0408 12:52:48.287410  433674 start.go:245] waiting for cluster config update ...
	I0408 12:52:48.287419  433674 start.go:254] writing updated cluster config ...
	I0408 12:52:48.287738  433674 ssh_runner.go:195] Run: rm -f paused
	I0408 12:52:48.341532  433674 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:52:48.343890  433674 out.go:177] * Done! kubectl is now configured to use "embed-certs-488947" cluster and "default" namespace by default
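	At this point the kubeconfig at /home/jenkins/minikube-integration/18588-368424/kubeconfig has embed-certs-488947 as the current context, so a plain kubectl call targets the new cluster. A quick smoke test (sketch):

	    kubectl config current-context   # expected: embed-certs-488947
	    kubectl get nodes -o wide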
	I0408 12:52:57.475303  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.552015668s)
	I0408 12:52:57.475390  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:57.492800  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:57.507211  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:57.520174  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:57.520203  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:57.520267  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:52:57.531854  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:57.531939  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:57.543764  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:52:57.555407  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:57.555479  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:57.569452  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.580478  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:57.580575  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.591819  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:52:57.602496  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:57.602589  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
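	The sequence above is minikube's stale-config cleanup for default-k8s-diff-port-527454: for each control-plane kubeconfig it greps for the expected endpoint (port 8444 here) and removes the file when the endpoint is missing or the file does not exist, so that the kubeadm init that follows regenerates them. Condensed, the per-file logic amounts to (sketch; the loop form is an assumption, the paths and endpoint are from this run):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done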
	I0408 12:52:57.613811  433439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:57.669998  433439 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:57.670137  433439 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:57.830674  433439 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:57.830802  433439 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:57.830882  433439 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:58.090382  433439 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:58.092626  433439 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:58.092733  433439 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:58.092809  433439 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:58.092906  433439 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:58.093027  433439 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:58.093130  433439 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:58.093202  433439 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:58.093547  433439 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:58.093941  433439 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:58.094342  433439 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:58.094708  433439 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:58.095077  433439 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:58.095159  433439 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:58.328890  433439 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:58.516475  433439 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:58.830765  433439 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:59.052737  433439 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:59.306668  433439 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:59.307305  433439 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:59.312102  433439 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:59.314983  433439 out.go:204]   - Booting up control plane ...
	I0408 12:52:59.315104  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:59.315191  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:59.315305  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:59.334624  433439 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:59.335637  433439 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:59.335713  433439 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:59.486408  433439 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:05.490227  433439 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002996 seconds
	I0408 12:53:05.526221  433439 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:53:05.553758  433439 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:53:06.101116  433439 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:53:06.101340  433439 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-527454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:53:06.616939  433439 kubeadm.go:309] [bootstrap-token] Using token: oe56hb.uz3a0dd96vnry1w3
	I0408 12:53:06.618840  433439 out.go:204]   - Configuring RBAC rules ...
	I0408 12:53:06.619038  433439 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:53:06.625364  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:53:06.638696  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:53:06.643811  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:53:06.647895  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:53:06.651857  433439 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:53:06.677056  433439 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:53:06.939588  433439 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:53:07.038633  433439 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:53:07.041464  433439 kubeadm.go:309] 
	I0408 12:53:07.041565  433439 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:53:07.041578  433439 kubeadm.go:309] 
	I0408 12:53:07.041680  433439 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:53:07.041699  433439 kubeadm.go:309] 
	I0408 12:53:07.041723  433439 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:53:07.041824  433439 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:53:07.041906  433439 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:53:07.041917  433439 kubeadm.go:309] 
	I0408 12:53:07.041988  433439 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:53:07.041998  433439 kubeadm.go:309] 
	I0408 12:53:07.042103  433439 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:53:07.042123  433439 kubeadm.go:309] 
	I0408 12:53:07.042168  433439 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:53:07.042253  433439 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:53:07.042351  433439 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:53:07.042361  433439 kubeadm.go:309] 
	I0408 12:53:07.042588  433439 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:53:07.042708  433439 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:53:07.042719  433439 kubeadm.go:309] 
	I0408 12:53:07.042823  433439 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.042959  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:53:07.042994  433439 kubeadm.go:309] 	--control-plane 
	I0408 12:53:07.043003  433439 kubeadm.go:309] 
	I0408 12:53:07.043127  433439 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:53:07.043143  433439 kubeadm.go:309] 
	I0408 12:53:07.043253  433439 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.043400  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:53:07.043583  433439 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:53:07.043608  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:53:07.043620  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:53:07.045283  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:53:07.046614  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:53:07.074907  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:53:07.107168  433439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:53:07.107232  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.107256  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-527454 minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=default-k8s-diff-port-527454 minikube.k8s.io/primary=true
	I0408 12:53:07.208551  433439 ops.go:34] apiserver oom_adj: -16
	I0408 12:53:07.395206  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.896090  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.396097  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.896240  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.395654  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.895751  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.396242  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.896204  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.395766  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.895555  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.396014  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.896092  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.395507  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.895832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.395237  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.895333  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.396191  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.895561  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.395832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.895785  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.395460  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.895320  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.395826  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.896002  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.396326  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.514796  433439 kubeadm.go:1107] duration metric: took 12.407623504s to wait for elevateKubeSystemPrivileges
	W0408 12:53:19.514843  433439 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:53:19.514856  433439 kubeadm.go:393] duration metric: took 5m9.134867072s to StartCluster
	I0408 12:53:19.514882  433439 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.514981  433439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:53:19.516708  433439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.516988  433439 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:53:19.518597  433439 out.go:177] * Verifying Kubernetes components...
	I0408 12:53:19.517057  433439 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:53:19.517238  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:53:19.519990  433439 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520011  433439 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:19.520003  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0408 12:53:19.520052  433439 addons.go:243] addon metrics-server should already be in state true
	I0408 12:53:19.520095  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.519995  433439 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520161  433439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.520247  433439 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:53:19.520274  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.520519  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520521  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520555  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520616  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520639  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520556  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.536637  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0408 12:53:19.536896  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0408 12:53:19.536997  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0408 12:53:19.537194  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537369  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537453  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537748  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537772  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.537883  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537895  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538210  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538262  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538352  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.538372  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538815  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.538818  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538875  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.539030  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.542211  433439 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.542228  433439 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:53:19.542252  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.542841  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.542871  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.556920  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0408 12:53:19.557552  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0408 12:53:19.557712  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.557930  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.558468  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.558482  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.559174  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.559474  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.559852  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.559881  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.560358  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.561323  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.561357  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.561606  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.563808  433439 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:53:19.565205  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:53:19.565224  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:53:19.565252  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.565914  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0408 12:53:19.566710  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.567503  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.567521  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.568270  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.568656  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.568664  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.569109  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.569136  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.569294  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.569451  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.569707  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.569894  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.570455  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.572243  433439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:53:19.573764  433439 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:19.573784  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:53:19.573804  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.576844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577310  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.577380  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577547  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.577851  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.578009  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.578154  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.579402  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0408 12:53:19.579860  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.580428  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.580448  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.581001  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.581202  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.582638  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.582913  433439 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:19.582929  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:53:19.582949  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.585995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586456  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.586488  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586665  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.586845  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.586974  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.587077  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.782606  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:53:19.822413  433439 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833467  433439 node_ready.go:49] node "default-k8s-diff-port-527454" has status "Ready":"True"
	I0408 12:53:19.833493  433439 node_ready.go:38] duration metric: took 11.040127ms for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833503  433439 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:19.845052  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:19.990826  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:20.027800  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:53:20.027827  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:53:20.066661  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:20.168240  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:53:20.168271  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:53:20.327307  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.327336  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:53:20.390128  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.455235  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455265  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455575  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455607  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.455618  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455628  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455912  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455929  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.494751  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.494778  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.495103  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.495126  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.495132  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.454862  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.388156991s)
	I0408 12:53:21.454942  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.454956  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455313  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.455368  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455377  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455386  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.455395  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455729  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455753  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455797  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.591677  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201496165s)
	I0408 12:53:21.591745  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.591760  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.592145  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592183  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592199  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.592214  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592484  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592501  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592513  433439 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:21.594462  433439 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0408 12:53:21.595731  433439 addons.go:505] duration metric: took 2.078676652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
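Note: with default-storageclass, storage-provisioner and metrics-server applied above, the resulting objects can be spot-checked by hand (illustrative, not part of the test; assumes the kubeconfig context matches the profile name, and the APIService name follows the upstream metrics-server manifest):

    kubectl --context default-k8s-diff-port-527454 -n kube-system get deployment metrics-server
    kubectl --context default-k8s-diff-port-527454 get apiservice v1beta1.metrics.k8s.io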
	I0408 12:53:21.852741  433439 pod_ready.go:102] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"False"
	I0408 12:53:22.375241  433439 pod_ready.go:92] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.375283  433439 pod_ready.go:81] duration metric: took 2.53020032s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.375298  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.391968  433439 pod_ready.go:92] pod "coredns-76f75df574-z56lf" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.392003  433439 pod_ready.go:81] duration metric: took 16.695581ms for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.392018  433439 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398659  433439 pod_ready.go:92] pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.398688  433439 pod_ready.go:81] duration metric: took 6.657546ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398699  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407214  433439 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.407241  433439 pod_ready.go:81] duration metric: took 8.535246ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407252  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416605  433439 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.416632  433439 pod_ready.go:81] duration metric: took 9.374648ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416644  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750191  433439 pod_ready.go:92] pod "kube-proxy-tlhff" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.750220  433439 pod_ready.go:81] duration metric: took 333.570363ms for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750231  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.148980  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:23.149009  433439 pod_ready.go:81] duration metric: took 398.771226ms for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.149018  433439 pod_ready.go:38] duration metric: took 3.315505787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
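Note: the readiness waits above poll each system-critical pod by label or component. Roughly the same check can be done with kubectl's built-in wait (illustrative equivalent for the kube-dns pods, not what minikube itself runs):

    kubectl --context default-k8s-diff-port-527454 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m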
	I0408 12:53:23.149034  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:53:23.149087  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:53:23.165120  433439 api_server.go:72] duration metric: took 3.648094543s to wait for apiserver process to appear ...
	I0408 12:53:23.165149  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:53:23.165170  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:53:23.171016  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:53:23.172486  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:53:23.172510  433439 api_server.go:131] duration metric: took 7.354957ms to wait for apiserver health ...
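Note: the health check above hits the apiserver's /healthz on the profile's non-default port 8444. By hand this is roughly the following (illustrative; -k skips verification of the minikube-signed certificate, and unauthenticated access to /healthz is an assumption about this cluster's authorization settings):

    curl -k https://192.168.50.7:8444/healthz
    # expected body on success: ok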
	I0408 12:53:23.172518  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:53:23.353807  433439 system_pods.go:59] 9 kube-system pods found
	I0408 12:53:23.353846  433439 system_pods.go:61] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.353853  433439 system_pods.go:61] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.353859  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.353866  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.353874  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.353879  433439 system_pods.go:61] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.353883  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.353890  433439 system_pods.go:61] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.353896  433439 system_pods.go:61] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.353911  433439 system_pods.go:74] duration metric: took 181.386053ms to wait for pod list to return data ...
	I0408 12:53:23.353923  433439 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:53:23.549663  433439 default_sa.go:45] found service account: "default"
	I0408 12:53:23.549702  433439 default_sa.go:55] duration metric: took 195.766529ms for default service account to be created ...
	I0408 12:53:23.549717  433439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:53:23.755668  433439 system_pods.go:86] 9 kube-system pods found
	I0408 12:53:23.755729  433439 system_pods.go:89] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.755739  433439 system_pods.go:89] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.755748  433439 system_pods.go:89] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.755755  433439 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.755761  433439 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.755768  433439 system_pods.go:89] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.755774  433439 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.755787  433439 system_pods.go:89] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.755792  433439 system_pods.go:89] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.755805  433439 system_pods.go:126] duration metric: took 206.081481ms to wait for k8s-apps to be running ...
	I0408 12:53:23.755814  433439 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:53:23.755866  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:23.774910  433439 system_svc.go:56] duration metric: took 19.080727ms WaitForService to wait for kubelet
	I0408 12:53:23.774954  433439 kubeadm.go:576] duration metric: took 4.257931558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:53:23.774985  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:53:23.949588  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:53:23.949618  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:53:23.949630  433439 node_conditions.go:105] duration metric: took 174.638826ms to run NodePressure ...
	I0408 12:53:23.949642  433439 start.go:240] waiting for startup goroutines ...
	I0408 12:53:23.949649  433439 start.go:245] waiting for cluster config update ...
	I0408 12:53:23.949659  433439 start.go:254] writing updated cluster config ...
	I0408 12:53:23.949929  433439 ssh_runner.go:195] Run: rm -f paused
	I0408 12:53:24.004633  433439 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:53:24.007640  433439 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-527454" cluster and "default" namespace by default
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
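Note: the troubleshooting hints in the kubeadm failure output above, bundled for a single pass on the node (commands and the cri-o socket path are taken verbatim from that output):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for a failing container ID:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID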
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
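Note: the four grep/rm pairs above implement one rule: if a kubeconfig file does not reference https://control-plane.minikube.internal:8443, remove it before retrying kubeadm init. As a loop (illustrative rewrite of what the log shows, not minikube source):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done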
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 
	
	
	==> CRI-O <==
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.023802668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712580950023774882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01358663-6a11-4d21-a78a-cd61fd1c720d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.024437035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7b222f7-00be-4e5f-9a30-17f2e1d67455 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.024604367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7b222f7-00be-4e5f-9a30-17f2e1d67455 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.024671632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7b222f7-00be-4e5f-9a30-17f2e1d67455 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.060690239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28958dcb-9809-4fb0-b028-1b41231d499a name=/runtime.v1.RuntimeService/Version
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.060796764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28958dcb-9809-4fb0-b028-1b41231d499a name=/runtime.v1.RuntimeService/Version
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.062645787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09f797c9-ce7f-4e0d-ac57-142b2ab2d49d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.063043097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712580950063020068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09f797c9-ce7f-4e0d-ac57-142b2ab2d49d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.063768983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64ce755d-e551-4076-8c83-2aa341d297ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.063821779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64ce755d-e551-4076-8c83-2aa341d297ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.063853295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=64ce755d-e551-4076-8c83-2aa341d297ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.101123845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afc5c78d-2fe2-4770-aee8-455043ca6e35 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.101201288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afc5c78d-2fe2-4770-aee8-455043ca6e35 name=/runtime.v1.RuntimeService/Version
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.103163098Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0228b01c-d406-42dc-9810-cf5f27fdd7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.103637959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712580950103527912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0228b01c-d406-42dc-9810-cf5f27fdd7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.104301298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90906294-25ab-4572-8267-fdd4638b3ee2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.104383273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90906294-25ab-4572-8267-fdd4638b3ee2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.104428809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=90906294-25ab-4572-8267-fdd4638b3ee2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.139964218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4adebafa-2f01-44a7-90ec-b18b699b4f9b name=/runtime.v1.RuntimeService/Version
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.140040611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4adebafa-2f01-44a7-90ec-b18b699b4f9b name=/runtime.v1.RuntimeService/Version
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.141701581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a659adeb-c8dd-47ae-8e21-94d144e089d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.142091913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712580950142067311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a659adeb-c8dd-47ae-8e21-94d144e089d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.142860993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f80a0b04-3c43-47cb-98d5-a2bdc6117187 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.142937123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f80a0b04-3c43-47cb-98d5-a2bdc6117187 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 12:55:50 old-k8s-version-384148 crio[654]: time="2024-04-08 12:55:50.142976171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f80a0b04-3c43-47cb-98d5-a2bdc6117187 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056085] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848078] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.261221] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.697800] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.947628] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.074394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060792] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.180910] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.184499] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.345839] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +7.450294] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.068325] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.296156] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Apr 8 12:48] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 8 12:51] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[Apr 8 12:53] systemd-fstab-generator[5230]: Ignoring "noauto" option for root device
	[  +0.074857] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:55:50 up 8 min,  0 users,  load average: 0.07, 0.15, 0.09
	Linux old-k8s-version-384148 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001d5340, 0xc00097a3f0, 0x1, 0x0, 0x0)
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000623340)
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: goroutine 111 [select]:
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c70460, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000d90f00, 0x0, 0x0)
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000623340)
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 08 12:55:47 old-k8s-version-384148 kubelet[5409]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 08 12:55:47 old-k8s-version-384148 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 08 12:55:47 old-k8s-version-384148 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 08 12:55:48 old-k8s-version-384148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 08 12:55:48 old-k8s-version-384148 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 08 12:55:48 old-k8s-version-384148 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 08 12:55:48 old-k8s-version-384148 kubelet[5474]: I0408 12:55:48.473931    5474 server.go:416] Version: v1.20.0
	Apr 08 12:55:48 old-k8s-version-384148 kubelet[5474]: I0408 12:55:48.474372    5474 server.go:837] Client rotation is on, will bootstrap in background
	Apr 08 12:55:48 old-k8s-version-384148 kubelet[5474]: I0408 12:55:48.476908    5474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 08 12:55:48 old-k8s-version-384148 kubelet[5474]: I0408 12:55:48.478194    5474 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 08 12:55:48 old-k8s-version-384148 kubelet[5474]: W0408 12:55:48.478238    5474 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
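
The kubeadm output above repeatedly points at the kubelet never answering on localhost:10248. A minimal sketch of the troubleshooting sequence that output itself recommends, assuming shell access to the node (for example via 'minikube ssh -p old-k8s-version-384148'; the profile name is taken from this run, and CONTAINERID is a placeholder):

    # check whether the kubelet service is running and why it last exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50

    # list any control-plane containers cri-o managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # inspect the logs of a failing container found above
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
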
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (254.578432ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-384148" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (800.27s)
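
The log above exits with K8S_KUBELET_NOT_RUNNING and suggests checking 'journalctl -xeu kubelet' and passing --extra-config=kubelet.cgroup-driver=systemd (related issue: https://github.com/kubernetes/minikube/issues/4172). A hedged sketch of how that suggestion could be applied to this profile; only a subset of the flags recorded for this run is shown, and this is illustrative rather than a verified fix:

    # retry the failed start with the cgroup driver the log recommends
    out/minikube-linux-amd64 start -p old-k8s-version-384148 \
      --memory=2200 --alsologtostderr --wait=true \
      --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
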

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0408 12:52:41.604158  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-135234 -n no-preload-135234
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-08 13:00:54.287446947 +0000 UTC m=+6032.074129153
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
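
The wait above polls the kubernetes-dashboard namespace for pods carrying the label k8s-app=kubernetes-dashboard. A minimal sketch of the equivalent manual check against this profile (the context name comes from this run; the commands are plain kubectl usage, not part of the test harness):

    # list the pods the test is waiting for
    kubectl --context no-preload-135234 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

    # or block until one reports Ready, with an explicit timeout
    kubectl --context no-preload-135234 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=120s
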
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-135234 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-135234 logs -n 25: (2.205001036s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo cat                                               |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo containerd config dump                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:42:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:42:32.207974  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:38.288048  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:41.359947  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:47.439972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:50.512009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:56.591982  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:59.664002  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:05.744032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:08.816017  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:14.895990  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:17.967942  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:24.048010  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:27.119964  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:33.200067  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:36.272037  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:42.351972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:45.424082  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:51.503992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:54.576088  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:00.656001  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:03.728079  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:09.807949  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:12.880051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:18.960024  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:22.032036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:28.112053  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:31.183992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:37.264032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:40.336026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:46.416019  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:49.487998  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:55.568026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:58.640044  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:04.719978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:07.792028  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:13.871997  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:16.944057  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:23.023969  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:26.096051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:32.176049  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:35.247929  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:41.328036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:44.399954  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:50.480046  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:53.552034  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:59.632009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:02.704063  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:08.784031  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:11.856098  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:17.936013  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:21.007970  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:27.087978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:30.159984  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:36.240042  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:39.245220  433557 start.go:364] duration metric: took 4m33.298555643s to acquireMachinesLock for "no-preload-135234"
	I0408 12:46:39.245298  433557 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:39.245311  433557 fix.go:54] fixHost starting: 
	I0408 12:46:39.245782  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:39.245821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:39.261035  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0408 12:46:39.261632  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:39.262208  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:46:39.262234  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:39.262592  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:39.262819  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:39.262938  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:46:39.264995  433557 fix.go:112] recreateIfNeeded on no-preload-135234: state=Stopped err=<nil>
	I0408 12:46:39.265029  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	W0408 12:46:39.265203  433557 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:39.266971  433557 out.go:177] * Restarting existing kvm2 VM for "no-preload-135234" ...
	I0408 12:46:39.268140  433557 main.go:141] libmachine: (no-preload-135234) Calling .Start
	I0408 12:46:39.268315  433557 main.go:141] libmachine: (no-preload-135234) Ensuring networks are active...
	I0408 12:46:39.269323  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network default is active
	I0408 12:46:39.269669  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network mk-no-preload-135234 is active
	I0408 12:46:39.270047  433557 main.go:141] libmachine: (no-preload-135234) Getting domain xml...
	I0408 12:46:39.270763  433557 main.go:141] libmachine: (no-preload-135234) Creating domain...
	I0408 12:46:40.496145  433557 main.go:141] libmachine: (no-preload-135234) Waiting to get IP...
	I0408 12:46:40.497357  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.497870  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.497950  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.497853  434768 retry.go:31] will retry after 305.764185ms: waiting for machine to come up
	I0408 12:46:40.805894  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.806351  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.806380  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.806304  434768 retry.go:31] will retry after 359.02584ms: waiting for machine to come up
	I0408 12:46:39.242442  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:39.242498  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.242871  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:46:39.242935  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.243206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:46:39.245063  433439 machine.go:97] duration metric: took 4m37.367683512s to provisionDockerMachine
	I0408 12:46:39.245112  433439 fix.go:56] duration metric: took 4m37.391017413s for fixHost
	I0408 12:46:39.245118  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 4m37.391040241s
	W0408 12:46:39.245140  433439 start.go:713] error starting host: provision: host is not running
	W0408 12:46:39.245388  433439 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0408 12:46:39.245401  433439 start.go:728] Will try again in 5 seconds ...
	I0408 12:46:41.167272  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.167748  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.167779  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.167702  434768 retry.go:31] will retry after 412.762727ms: waiting for machine to come up
	I0408 12:46:41.582454  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.582959  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.582990  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.582904  434768 retry.go:31] will retry after 572.486121ms: waiting for machine to come up
	I0408 12:46:42.156830  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.157270  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.157294  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.157243  434768 retry.go:31] will retry after 706.130574ms: waiting for machine to come up
	I0408 12:46:42.865325  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.865829  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.865863  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.865762  434768 retry.go:31] will retry after 901.114252ms: waiting for machine to come up
	I0408 12:46:43.768578  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:43.769067  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:43.769103  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:43.769032  434768 retry.go:31] will retry after 1.160836088s: waiting for machine to come up
	I0408 12:46:44.931002  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:44.931408  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:44.931438  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:44.931349  434768 retry.go:31] will retry after 998.940623ms: waiting for machine to come up
	I0408 12:46:44.247774  433439 start.go:360] acquireMachinesLock for default-k8s-diff-port-527454: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:46:45.931728  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:45.932157  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:45.932241  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:45.932115  434768 retry.go:31] will retry after 1.43975568s: waiting for machine to come up
	I0408 12:46:47.373294  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:47.373786  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:47.373821  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:47.373733  434768 retry.go:31] will retry after 1.828434336s: waiting for machine to come up
	I0408 12:46:49.205019  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:49.205414  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:49.205462  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:49.205376  434768 retry.go:31] will retry after 2.847051956s: waiting for machine to come up
	I0408 12:46:52.055004  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:52.055561  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:52.055586  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:52.055517  434768 retry.go:31] will retry after 2.941262871s: waiting for machine to come up
	I0408 12:46:54.998158  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:54.998598  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:54.998631  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:54.998542  434768 retry.go:31] will retry after 3.082026915s: waiting for machine to come up
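	[Editor's note] The repeated retry.go:31 "will retry after ..." lines above show the machine-IP poll backing off with a delay that grows on each attempt. Below is a minimal, self-contained Go sketch of that retry-with-growing-delay pattern; the helper names (lookupIP, waitForIP), the backoff factor, and the jitter are illustrative assumptions, not minikube's actual implementation.

	// retrysketch.go - illustrative only; not minikube source.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stand-in (assumption) for querying the DHCP leases for the
	// domain's current IP; here it always fails, as it would before the VM is up.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls lookupIP, sleeping a little longer (with jitter) after each
	// failed attempt, and gives up after the deadline - roughly the behaviour the
	// "will retry after ..." log lines reflect.
	func waitForIP(deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				return ip, nil
			}
			if time.Since(start) > deadline {
				return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay each round
		}
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}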
	I0408 12:46:59.561049  433674 start.go:364] duration metric: took 4m43.922045129s to acquireMachinesLock for "embed-certs-488947"
	I0408 12:46:59.561130  433674 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:59.561140  433674 fix.go:54] fixHost starting: 
	I0408 12:46:59.561636  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:59.561683  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:59.578117  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0408 12:46:59.578573  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:59.579047  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:46:59.579074  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:59.579432  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:59.579633  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:46:59.579852  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:46:59.581445  433674 fix.go:112] recreateIfNeeded on embed-certs-488947: state=Stopped err=<nil>
	I0408 12:46:59.581492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	W0408 12:46:59.581667  433674 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:59.584306  433674 out.go:177] * Restarting existing kvm2 VM for "embed-certs-488947" ...
	I0408 12:46:59.585750  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Start
	I0408 12:46:59.585971  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring networks are active...
	I0408 12:46:59.586749  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network default is active
	I0408 12:46:59.587136  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network mk-embed-certs-488947 is active
	I0408 12:46:59.587551  433674 main.go:141] libmachine: (embed-certs-488947) Getting domain xml...
	I0408 12:46:59.588302  433674 main.go:141] libmachine: (embed-certs-488947) Creating domain...
	I0408 12:46:58.084025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084608  433557 main.go:141] libmachine: (no-preload-135234) Found IP for machine: 192.168.61.48
	I0408 12:46:58.084660  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has current primary IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084668  433557 main.go:141] libmachine: (no-preload-135234) Reserving static IP address...
	I0408 12:46:58.085160  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.085198  433557 main.go:141] libmachine: (no-preload-135234) Reserved static IP address: 192.168.61.48
	I0408 12:46:58.085213  433557 main.go:141] libmachine: (no-preload-135234) DBG | skip adding static IP to network mk-no-preload-135234 - found existing host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"}
	I0408 12:46:58.085229  433557 main.go:141] libmachine: (no-preload-135234) DBG | Getting to WaitForSSH function...
	I0408 12:46:58.085240  433557 main.go:141] libmachine: (no-preload-135234) Waiting for SSH to be available...
	I0408 12:46:58.087595  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.087990  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.088025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.088155  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH client type: external
	I0408 12:46:58.088178  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa (-rw-------)
	I0408 12:46:58.088210  433557 main.go:141] libmachine: (no-preload-135234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:46:58.088228  433557 main.go:141] libmachine: (no-preload-135234) DBG | About to run SSH command:
	I0408 12:46:58.088241  433557 main.go:141] libmachine: (no-preload-135234) DBG | exit 0
	I0408 12:46:58.220043  433557 main.go:141] libmachine: (no-preload-135234) DBG | SSH cmd err, output: <nil>: 
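	[Editor's note] The WaitForSSH step above probes the guest by running "exit 0" through an external ssh client with a fixed set of non-interactive options. The Go sketch below illustrates that kind of reachability probe; the option list mirrors the log, but the function name sshReady, the placeholder address, and the key path are assumptions for illustration, not minikube's code.

	// sshprobe.go - illustrative sketch of a WaitForSSH-style check; not minikube source.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady runs "exit 0" on the guest through the external ssh client, using
	// the same kind of non-interactive options that appear in the log above.
	// It returns nil once the command succeeds, i.e. once sshd accepts the login.
	func sshReady(user, addr, keyPath string) error {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		// Address and key path are placeholders, not values from this run.
		if err := sshReady("docker", "192.0.2.10", "/path/to/id_rsa"); err != nil {
			fmt.Println("ssh not ready yet:", err)
			return
		}
		fmt.Println("ssh is available")
	}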
	I0408 12:46:58.220440  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetConfigRaw
	I0408 12:46:58.221216  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.223881  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224184  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.224202  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224597  433557 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:46:58.224804  433557 machine.go:94] provisionDockerMachine start ...
	I0408 12:46:58.224828  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:58.225070  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.227668  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228048  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.228080  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228242  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.228438  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228647  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228780  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.228941  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.229238  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.229253  433557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:46:58.344562  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:46:58.344602  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.344888  433557 buildroot.go:166] provisioning hostname "no-preload-135234"
	I0408 12:46:58.344922  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.345147  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.347895  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348278  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.348311  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348433  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.348638  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348801  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348911  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.349077  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.349289  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.349303  433557 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-135234 && echo "no-preload-135234" | sudo tee /etc/hostname
	I0408 12:46:58.478959  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-135234
	
	I0408 12:46:58.478996  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.481692  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482164  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.482187  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482410  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.482643  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.482851  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.483032  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.483230  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.483446  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.483465  433557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-135234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-135234/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-135234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:46:58.606022  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:58.606059  433557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:46:58.606080  433557 buildroot.go:174] setting up certificates
	I0408 12:46:58.606092  433557 provision.go:84] configureAuth start
	I0408 12:46:58.606108  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.606465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.609605  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610046  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.610079  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.612452  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612756  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.612784  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612905  433557 provision.go:143] copyHostCerts
	I0408 12:46:58.612974  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:46:58.613029  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:46:58.613097  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:46:58.613200  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:46:58.613209  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:46:58.613232  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:46:58.613295  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:46:58.613302  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:46:58.613323  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:46:58.613438  433557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.no-preload-135234 san=[127.0.0.1 192.168.61.48 localhost minikube no-preload-135234]
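	[Editor's note] The provision.go:117 line above generates a server certificate whose SANs cover the loopback address, the VM IP, and the machine names. The sketch below shows how such a SAN-bearing certificate can be issued with Go's standard crypto/x509 package; it is self-signed for brevity and uses placeholder names and addresses, so it illustrates the idea only and is not minikube's provisioning code.

	// certsketch.go - illustrative only; a SAN-bearing server cert, not minikube's code.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Key for the server certificate (a real setup would sign with a separate CA key).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		template := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"example.server"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs: the names and addresses the certificate should be valid for.
			DNSNames:    []string{"localhost", "minikube", "example-machine"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.0.2.10")},
		}

		// Self-signed here; when signing with a CA, the parent would be the CA cert.
		der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}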
	I0408 12:46:58.832264  433557 provision.go:177] copyRemoteCerts
	I0408 12:46:58.832335  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:46:58.832382  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.835259  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835609  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.835650  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835883  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.836158  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.836332  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.836468  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:58.922968  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:46:58.949601  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 12:46:58.976832  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:46:59.004643  433557 provision.go:87] duration metric: took 398.533019ms to configureAuth
	I0408 12:46:59.004683  433557 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:46:59.004885  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:46:59.004988  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.008264  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008735  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.008783  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008987  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.009238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009416  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009542  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.009680  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.009866  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.009884  433557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:46:59.299880  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:46:59.299912  433557 machine.go:97] duration metric: took 1.075094362s to provisionDockerMachine
	I0408 12:46:59.299925  433557 start.go:293] postStartSetup for "no-preload-135234" (driver="kvm2")
	I0408 12:46:59.299940  433557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:46:59.299981  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.300373  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:46:59.300406  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.303274  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303769  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.303806  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303941  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.304222  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.304575  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.304874  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.395808  433557 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:46:59.400795  433557 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:46:59.400831  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:46:59.400914  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:46:59.401021  433557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:46:59.401162  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:46:59.411883  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:46:59.438486  433557 start.go:296] duration metric: took 138.54299ms for postStartSetup
	I0408 12:46:59.438546  433557 fix.go:56] duration metric: took 20.19323532s for fixHost
	I0408 12:46:59.438577  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.441875  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442334  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.442366  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442528  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.442753  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.442969  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.443101  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.443232  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.443414  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.443424  433557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:46:59.560853  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580419.531854515
	
	I0408 12:46:59.560881  433557 fix.go:216] guest clock: 1712580419.531854515
	I0408 12:46:59.560891  433557 fix.go:229] Guest: 2024-04-08 12:46:59.531854515 +0000 UTC Remote: 2024-04-08 12:46:59.438552641 +0000 UTC m=+293.653384531 (delta=93.301874ms)
	I0408 12:46:59.560918  433557 fix.go:200] guest clock delta is within tolerance: 93.301874ms
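	[Editor's note] The fix.go lines above read the guest's clock over SSH, compare it with the local reading, and accept the 93ms drift as within tolerance. Below is a minimal Go sketch of that comparison; the parsing of a "seconds.nanoseconds" string (the shape of `date +%s.%N` output) is real, but the stand-in host reading and the tolerance value are assumptions for illustration.

	// clockdelta.go - illustrative sketch of a guest/host clock drift check; not minikube source.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseEpoch turns a "seconds.nanoseconds" string into a time.Time.
	func parseEpoch(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad to 9 digits so "5" means 500000000ns, not 5ns.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseEpoch("1712580419.531854515") // guest value taken from the log above
		if err != nil {
			panic(err)
		}
		host := guest.Add(-93 * time.Millisecond) // stand-in for the local clock reading
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
	}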
	I0408 12:46:59.560929  433557 start.go:83] releasing machines lock for "no-preload-135234", held for 20.315655744s
	I0408 12:46:59.560965  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.561244  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:59.564248  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564623  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.564658  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564758  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565245  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565434  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565524  433557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:46:59.565571  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.565726  433557 ssh_runner.go:195] Run: cat /version.json
	I0408 12:46:59.565752  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.568339  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568729  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568766  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.568789  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568931  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569139  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569201  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.569227  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.569300  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569392  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569486  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569647  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.569782  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569900  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.689264  433557 ssh_runner.go:195] Run: systemctl --version
	I0408 12:46:59.695704  433557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:46:59.848323  433557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:46:59.856068  433557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:46:59.856171  433557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:46:59.877460  433557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:46:59.877490  433557 start.go:494] detecting cgroup driver to use...
	I0408 12:46:59.877557  433557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:46:59.895329  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:46:59.910849  433557 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:46:59.910908  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:46:59.925541  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:46:59.941511  433557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:00.064454  433557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:00.218535  433557 docker.go:233] disabling docker service ...
	I0408 12:47:00.218614  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:00.234510  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:00.249703  433557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:00.403556  433557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:00.569324  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:00.585058  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:00.607536  433557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:00.607592  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.624701  433557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:00.624774  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.637414  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.649846  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.662725  433557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:00.675738  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.688667  433557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.710326  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.722619  433557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:00.734130  433557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:00.734227  433557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:00.749998  433557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:00.761556  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:00.881544  433557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:01.036952  433557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:01.037040  433557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:01.042260  433557 start.go:562] Will wait 60s for crictl version
	I0408 12:47:01.042329  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.046327  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:01.092359  433557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:01.092465  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.127373  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.165027  433557 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
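The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed edits (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts crio. A rough sketch of driving the same edits from Go, assuming local execution via os/exec instead of minikube's ssh_runner and omitting the "create default_sysctls if absent" step:

    // Hypothetical sketch of the cri-o reconfiguration step logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runCmd(script string) error {
    	out, err := exec.Command("sh", "-c", script).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q failed: %v\n%s", script, err, out)
    	}
    	return nil
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := []string{
    		// point cri-o at the kubeadm pause image
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
    		// use cgroupfs as the cgroup driver
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
    		// keep conmon in the pod cgroup
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		// allow unprivileged low ports inside pods
    		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
    		// pick up the new config
    		"sudo systemctl restart crio",
    	}
    	for _, s := range steps {
    		if err := runCmd(s); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    }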
	I0408 12:47:00.888196  433674 main.go:141] libmachine: (embed-certs-488947) Waiting to get IP...
	I0408 12:47:00.889196  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:00.889766  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:00.889808  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:00.889702  434916 retry.go:31] will retry after 239.282192ms: waiting for machine to come up
	I0408 12:47:01.130508  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.131075  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.131111  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.131016  434916 retry.go:31] will retry after 388.837258ms: waiting for machine to come up
	I0408 12:47:01.522006  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.522413  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.522444  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.522364  434916 retry.go:31] will retry after 372.310428ms: waiting for machine to come up
	I0408 12:47:01.896325  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.896919  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.896954  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.896851  434916 retry.go:31] will retry after 574.930775ms: waiting for machine to come up
	I0408 12:47:02.474045  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.474626  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.474664  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.474557  434916 retry.go:31] will retry after 506.414729ms: waiting for machine to come up
	I0408 12:47:02.982589  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.983203  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.983238  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.983135  434916 retry.go:31] will retry after 614.351996ms: waiting for machine to come up
	I0408 12:47:03.599165  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:03.599682  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:03.599724  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:03.599640  434916 retry.go:31] will retry after 1.130025801s: waiting for machine to come up
	I0408 12:47:04.731350  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:04.731841  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:04.731874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:04.731791  434916 retry.go:31] will retry after 1.346613974s: waiting for machine to come up
	I0408 12:47:01.166849  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:47:01.169772  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170183  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:01.170211  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170523  433557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:01.175336  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:01.193759  433557 kubeadm.go:877] updating cluster {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:01.193949  433557 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 12:47:01.194017  433557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:01.234439  433557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0408 12:47:01.234466  433557 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:01.234547  433557 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.234575  433557 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.234589  433557 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.234625  433557 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 12:47:01.234576  433557 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.234562  433557 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.234696  433557 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.234554  433557 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.236654  433557 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.236678  433557 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.236701  433557 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 12:47:01.236686  433557 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.236630  433557 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236789  433557 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.475737  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.476344  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.482596  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.486680  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.490012  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.496685  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0408 12:47:01.510269  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.597119  433557 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0408 12:47:01.597179  433557 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.597238  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696018  433557 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0408 12:47:01.696123  433557 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.696148  433557 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0408 12:47:01.696196  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696201  433557 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.696237  433557 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0408 12:47:01.696254  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696265  433557 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.696299  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.710260  433557 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0408 12:47:01.710317  433557 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.710369  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799524  433557 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0408 12:47:01.799583  433557 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.799592  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.799616  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.799626  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.799618  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799679  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.799734  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.916654  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 12:47:01.916701  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.916783  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:01.916809  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.923863  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.923904  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.923974  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.924021  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.924065  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924176  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924067  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.926651  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0408 12:47:01.926681  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926722  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926783  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0408 12:47:01.974801  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0408 12:47:01.974875  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974939  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:01.974969  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974944  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0408 12:47:02.062944  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.916991  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.990237597s)
	I0408 12:47:04.917016  433557 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.942055075s)
	I0408 12:47:04.917036  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0408 12:47:04.917040  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0408 12:47:04.917047  433557 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917098  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917117  433557 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.854126587s)
	I0408 12:47:04.917187  433557 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0408 12:47:04.917233  433557 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.917278  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:06.080429  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:06.080910  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:06.080942  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:06.080866  434916 retry.go:31] will retry after 1.125692215s: waiting for machine to come up
	I0408 12:47:07.208553  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:07.209015  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:07.209040  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:07.208961  434916 retry.go:31] will retry after 1.958080491s: waiting for machine to come up
	I0408 12:47:09.169878  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:09.170289  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:09.170319  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:09.170243  434916 retry.go:31] will retry after 2.241966019s: waiting for machine to come up
	I0408 12:47:08.833969  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.916836964s)
	I0408 12:47:08.834011  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0408 12:47:08.834029  433557 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834032  433557 ssh_runner.go:235] Completed: which crictl: (3.916731005s)
	I0408 12:47:08.834085  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834101  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:11.414435  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:11.414829  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:11.414851  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:11.414786  434916 retry.go:31] will retry after 2.815941766s: waiting for machine to come up
	I0408 12:47:14.233868  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:14.234272  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:14.234318  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:14.234228  434916 retry.go:31] will retry after 3.213192238s: waiting for machine to come up
	I0408 12:47:10.925471  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091353526s)
	I0408 12:47:10.925519  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0408 12:47:10.925542  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925581  433557 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.091434251s)
	I0408 12:47:10.925612  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925673  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 12:47:10.925782  433557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:12.405175  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.479529413s)
	I0408 12:47:12.405221  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0408 12:47:12.405238  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:12.405236  433557 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.479424271s)
	I0408 12:47:12.405270  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0408 12:47:12.405296  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:14.283021  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (1.877693108s)
	I0408 12:47:14.283061  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0408 12:47:14.283079  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:14.283143  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
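Because no preload tarball exists for v1.30.0-rc.0, the run above falls back to LoadCachedImages: each image the runtime cannot produce at the expected hash is removed with crictl, its cached tarball is checked under /var/lib/minikube/images, and it is loaded with `sudo podman load -i`. A simplified sketch of that loop, assuming local execution and a hard-coded image list rather than minikube's ssh_runner and cache metadata:

    // Hypothetical sketch of the image cache-load loop logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    func imagePresent(ref string) bool {
    	// `podman image inspect` exits non-zero when the image is absent.
    	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
    }

    func loadTarball(tar string) error {
    	out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v\n%s", tar, err, out)
    	}
    	return nil
    }

    func main() {
    	cacheDir := "/var/lib/minikube/images" // on-VM cache directory from the log
    	images := map[string]string{
    		"registry.k8s.io/kube-apiserver:v1.30.0-rc.0": "kube-apiserver_v1.30.0-rc.0",
    		"registry.k8s.io/etcd:3.5.12-0":               "etcd_3.5.12-0",
    		"registry.k8s.io/coredns/coredns:v1.11.1":     "coredns_v1.11.1",
    	}
    	for ref, file := range images {
    		if imagePresent(ref) {
    			continue // already loaded, nothing to transfer
    		}
    		if err := loadTarball(filepath.Join(cacheDir, file)); err != nil {
    			fmt.Println(err)
    		}
    	}
    }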
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:17.451190  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451657  433674 main.go:141] libmachine: (embed-certs-488947) Found IP for machine: 192.168.72.159
	I0408 12:47:17.451705  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has current primary IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451725  433674 main.go:141] libmachine: (embed-certs-488947) Reserving static IP address...
	I0408 12:47:17.452192  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.452239  433674 main.go:141] libmachine: (embed-certs-488947) Reserved static IP address: 192.168.72.159
	I0408 12:47:17.452259  433674 main.go:141] libmachine: (embed-certs-488947) DBG | skip adding static IP to network mk-embed-certs-488947 - found existing host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"}
	I0408 12:47:17.452282  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Getting to WaitForSSH function...
	I0408 12:47:17.452297  433674 main.go:141] libmachine: (embed-certs-488947) Waiting for SSH to be available...
	I0408 12:47:17.454780  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455169  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.455208  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH client type: external
	I0408 12:47:17.455354  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa (-rw-------)
	I0408 12:47:17.455384  433674 main.go:141] libmachine: (embed-certs-488947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:17.455401  433674 main.go:141] libmachine: (embed-certs-488947) DBG | About to run SSH command:
	I0408 12:47:17.455414  433674 main.go:141] libmachine: (embed-certs-488947) DBG | exit 0
	I0408 12:47:17.585037  433674 main.go:141] libmachine: (embed-certs-488947) DBG | SSH cmd err, output: <nil>: 
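WaitForSSH above probes the freshly booted VM by invoking the external ssh binary with host-key checking disabled until `exit 0` succeeds. A small sketch of such a probe, assuming a fixed retry count and interval (the real backoff and option handling differ):

    // Hypothetical sketch of an external-SSH availability probe.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForSSH(user, ip, key string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", key,
    		"-p", "22",
    		fmt.Sprintf("%s@%s", user, ip),
    		"exit 0", // the probe command from the log
    	}
    	for attempt := 0; attempt < 10; attempt++ {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s@%s never became available", user, ip)
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa"
    	if err := waitForSSH("docker", "192.168.72.159", key); err != nil {
    		fmt.Println(err)
    	}
    }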
	I0408 12:47:17.585443  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetConfigRaw
	I0408 12:47:17.586184  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.589492  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.589953  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.589985  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.590269  433674 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:47:17.590518  433674 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:17.590550  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:17.590798  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.593968  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594570  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.594615  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594832  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.595073  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595236  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595442  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.595661  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.595892  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.595905  433674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:17.708468  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:17.708504  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.708857  433674 buildroot.go:166] provisioning hostname "embed-certs-488947"
	I0408 12:47:17.708890  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.709083  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.712242  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712698  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.712732  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712928  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.713122  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713298  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713433  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.713612  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.713801  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.713817  433674 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
	I0408 12:47:17.842964  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488947
	
	I0408 12:47:17.843017  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.846436  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.846959  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.846992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.847225  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.847486  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847726  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847945  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.848182  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.848373  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.848397  433674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488947/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:17.975087  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:17.975123  433674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:17.975178  433674 buildroot.go:174] setting up certificates
	I0408 12:47:17.975198  433674 provision.go:84] configureAuth start
	I0408 12:47:17.975212  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.975606  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.979028  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979483  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.979510  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979754  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.982474  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.982944  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.982977  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.983174  433674 provision.go:143] copyHostCerts
	I0408 12:47:17.983230  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:17.983240  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:17.983291  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:17.983408  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:17.983419  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:17.983444  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:17.983500  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:17.983507  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:17.983526  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:17.983580  433674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488947 san=[127.0.0.1 192.168.72.159 embed-certs-488947 localhost minikube]
	I0408 12:47:18.043022  433674 provision.go:177] copyRemoteCerts
	I0408 12:47:18.043092  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:18.043162  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.046335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046722  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.046757  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046904  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.047145  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.047333  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.047475  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.134761  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:18.163745  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 12:47:18.192946  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:18.220790  433674 provision.go:87] duration metric: took 245.573885ms to configureAuth
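configureAuth above regenerates the machine's server certificate with the VM IP and hostnames as SANs and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A stripped-down sketch of issuing such a SAN certificate with the Go standard library; it is self-signed here for brevity, whereas minikube signs with its own CA key:

    // Hypothetical sketch of generating a server cert with the SANs from the log.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs taken from the provision.go line for embed-certs-488947.
    	dnsNames := []string{"embed-certs-488947", "localhost", "minikube"}
    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.159")}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-488947"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames,
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	out, err := os.Create("server.pem")
    	if err != nil {
    		panic(err)
    	}
    	defer out.Close()
    	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }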
	I0408 12:47:18.220827  433674 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:18.221067  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:47:18.221175  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.224177  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.224805  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.224839  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.225098  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.225363  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225569  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225797  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.226024  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.226202  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.226219  433674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:18.522682  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:18.522718  433674 machine.go:97] duration metric: took 932.18024ms to provisionDockerMachine
	I0408 12:47:18.522735  433674 start.go:293] postStartSetup for "embed-certs-488947" (driver="kvm2")
	I0408 12:47:18.522750  433674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:18.522776  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.523133  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:18.523174  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.526523  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.526872  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.526903  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.527101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.527336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.527512  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.527692  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.615353  433674 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:18.620414  433674 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:18.620447  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:18.620525  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:18.620627  433674 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:18.620726  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:18.630585  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:18.658952  433674 start.go:296] duration metric: took 136.200863ms for postStartSetup
	I0408 12:47:18.659004  433674 fix.go:56] duration metric: took 19.097863992s for fixHost
	I0408 12:47:18.659037  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.662115  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662571  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.662606  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662843  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.663100  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663308  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663480  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.663676  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.663919  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.663939  433674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:18.781355  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580438.730334929
	
	I0408 12:47:18.781402  433674 fix.go:216] guest clock: 1712580438.730334929
	I0408 12:47:18.781427  433674 fix.go:229] Guest: 2024-04-08 12:47:18.730334929 +0000 UTC Remote: 2024-04-08 12:47:18.659010209 +0000 UTC m=+303.178294166 (delta=71.32472ms)
	I0408 12:47:18.781457  433674 fix.go:200] guest clock delta is within tolerance: 71.32472ms
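The fix.go lines above read the guest's clock over SSH with `date +%s.%N`, compare it against the local wall clock, and accept the machine when the delta is within tolerance. A minimal, self-contained sketch of that comparison (the parsing helper and the 2-second tolerance are illustrative assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch parses `date +%s.%N` output such as "1712580438.730334929".
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		// Pad the fractional part to exactly 9 digits so it reads as nanoseconds.
		for len(frac) < 9 {
			frac += "0"
		}
		if nsec, err = strconv.ParseInt(frac[:9], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1712580438.730334929") // stand-in for the SSH output
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}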
	I0408 12:47:18.781465  433674 start.go:83] releasing machines lock for "embed-certs-488947", held for 19.22036189s
	I0408 12:47:18.781502  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.781800  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:18.784825  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785270  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.785313  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786104  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786456  433674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:18.786501  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.786626  433674 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:18.786660  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.789409  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789704  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790019  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790149  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790306  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790322  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790338  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790495  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790528  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790745  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.790867  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790997  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.911025  433674 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:18.917785  433674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:19.070383  433674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:19.077521  433674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:19.077606  433674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:19.094598  433674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:19.094636  433674 start.go:494] detecting cgroup driver to use...
	I0408 12:47:19.094750  433674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:19.111163  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:19.125621  433674 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:19.125688  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:19.141948  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:19.156671  433674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:19.281688  433674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:19.455445  433674 docker.go:233] disabling docker service ...
	I0408 12:47:19.455519  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:19.474594  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:19.491301  433674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:19.646063  433674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:19.786075  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:19.803535  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:19.829204  433674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:19.829282  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.842132  433674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:19.842201  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.853915  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.866449  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.879235  433674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:19.899411  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.920363  433674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.946414  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
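The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough Go rendering of those line-oriented replacements (the sample file content is an assumption; the real edits run remotely over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, mirroring the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Use cgroupfs as the cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	// Allow unprivileged low ports inside pods (appended here for simplicity;
	// the log first checks whether a default_sysctls block already exists).
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}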
	I0408 12:47:19.958824  433674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:19.969691  433674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:19.969754  433674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:19.986458  433674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
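When `sysctl net.bridge.bridge-nf-call-iptables` fails, as above, the knob simply does not exist yet because br_netfilter is not loaded; the tooling loads the module and then enables IP forwarding by writing directly to /proc. A small sketch of that check-then-enable pattern (the /proc paths are the standard kernel interfaces; running it needs root on a Linux host):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the bridge netfilter knob is missing, br_netfilter is not loaded yet.
	if _, err := os.Stat(bridgeNF); err != nil {
		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
			return
		}
	}

	// Enable IPv4 forwarding, equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed (root required):", err)
		return
	}
	fmt.Println("netfilter prerequisites configured")
}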
	I0408 12:47:19.998655  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:20.157494  433674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:20.318209  433674 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:20.318287  433674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:20.325414  433674 start.go:562] Will wait 60s for crictl version
	I0408 12:47:20.325490  433674 ssh_runner.go:195] Run: which crictl
	I0408 12:47:20.330070  433674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:20.383808  433674 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:20.383959  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.417705  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.454321  433674 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:47:20.456101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:20.460035  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.460734  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:20.460774  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.461140  433674 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:20.467650  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
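The one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends the current gateway IP, so repeated provisioning stays idempotent. A minimal Go rendering of the same filter-then-append idea (the path and entry come from the log; writing to a temporary copy first mirrors the /tmp/h.$$ trick, and is omitted here to keep the sketch side-effect free):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" from the hosts file content
// and appends a fresh "<ip>\t<name>" entry.
func upsertHost(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	return out + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHost(string(data), "192.168.72.1", "host.minikube.internal"))
}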
	I0408 12:47:20.486936  433674 kubeadm.go:877] updating cluster {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:20.487105  433674 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:47:20.487176  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:20.529152  433674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:47:20.529293  433674 ssh_runner.go:195] Run: which lz4
	I0408 12:47:16.552712  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.26954566s)
	I0408 12:47:16.552781  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0408 12:47:16.552797  433557 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:16.552839  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:17.512103  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 12:47:17.512151  433557 cache_images.go:123] Successfully loaded all cached images
	I0408 12:47:17.512158  433557 cache_images.go:92] duration metric: took 16.277680364s to LoadCachedImages
	I0408 12:47:17.512171  433557 kubeadm.go:928] updating node { 192.168.61.48 8443 v1.30.0-rc.0 crio true true} ...
	I0408 12:47:17.512324  433557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-135234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:17.512440  433557 ssh_runner.go:195] Run: crio config
	I0408 12:47:17.561382  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:17.561424  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:17.561441  433557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:17.561472  433557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-135234 NodeName:no-preload-135234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:17.561681  433557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-135234"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:17.561807  433557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0408 12:47:17.574237  433557 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:17.574321  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:17.587129  433557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0408 12:47:17.609022  433557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0408 12:47:17.629656  433557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0408 12:47:17.650373  433557 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:17.655031  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:17.670872  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:17.811548  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:17.830945  433557 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234 for IP: 192.168.61.48
	I0408 12:47:17.830974  433557 certs.go:194] generating shared ca certs ...
	I0408 12:47:17.831000  433557 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:17.831219  433557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:17.831277  433557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:17.831290  433557 certs.go:256] generating profile certs ...
	I0408 12:47:17.831453  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/client.key
	I0408 12:47:17.831521  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key.dbd08c09
	I0408 12:47:17.831577  433557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key
	I0408 12:47:17.831823  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:17.831891  433557 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:17.831906  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:17.831946  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:17.831978  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:17.832007  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:17.832059  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:17.832899  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:17.869894  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:17.902893  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:17.943547  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:17.990462  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:47:18.026697  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:18.055643  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:18.083357  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:47:18.109247  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:18.134513  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:18.161811  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:18.189968  433557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:18.210173  433557 ssh_runner.go:195] Run: openssl version
	I0408 12:47:18.216813  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:18.230693  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236461  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236526  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.244183  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:18.257589  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:18.271235  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277004  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277088  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.283549  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:18.296789  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:18.309587  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314537  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314608  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.320942  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:18.333407  433557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:18.338637  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:18.345365  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:18.352262  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:18.359464  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:18.366233  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:18.373280  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
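Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours; a non-zero exit would mark the cert for regeneration. The same check expressed with the Go standard library (the file path is just one of the certs listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, which is what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}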
	I0408 12:47:18.380134  433557 kubeadm.go:391] StartCluster: {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:18.380291  433557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:18.380403  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.423068  433557 cri.go:89] found id: ""
	I0408 12:47:18.423164  433557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:18.435458  433557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:18.435497  433557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:18.435503  433557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:18.435562  433557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:18.447509  433557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:18.448720  433557 kubeconfig.go:125] found "no-preload-135234" server: "https://192.168.61.48:8443"
	I0408 12:47:18.451154  433557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:18.463246  433557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.48
	I0408 12:47:18.463299  433557 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:18.463315  433557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:18.463394  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.522929  433557 cri.go:89] found id: ""
	I0408 12:47:18.523011  433557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:18.546346  433557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:18.558613  433557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:18.558640  433557 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:18.558714  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:18.570020  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:18.570106  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:18.581323  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:18.593718  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:18.593778  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:18.606889  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.619251  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:18.619320  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.632343  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:18.644913  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:18.645004  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:18.656965  433557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:18.670774  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:18.785507  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:19.988135  433557 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202584017s)
	I0408 12:47:19.988174  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.235430  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.316709  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.456307  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:20.456393  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
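The retry.go lines above poll the libvirt DHCP leases with growing, slightly jittered waits until the VM reports an address. A generic sketch of that retry-until-success loop (the growth factor, jitter, and probe function are illustrative assumptions, not minikube's exact policy):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls probe until it succeeds or attempts run out, sleeping a growing,
// jittered interval between tries, similar to the waits in the log above.
func retry(attempts int, base time.Duration, probe func() error) error {
	wait := base
	for i := 0; i < attempts; i++ {
		err := probe()
		if err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait)/2 + 1))
		fmt.Printf("attempt %d failed (%v), retrying after %v\n", i+1, err, wait+jitter)
		time.Sleep(wait + jitter)
		wait = wait * 3 / 2 // grow the base wait by ~50% each round
	}
	return errors.New("gave up waiting")
}

func main() {
	start := time.Now()
	err := retry(10, 200*time.Millisecond, func() error {
		// Stand-in for "look up the domain's IP in the DHCP leases".
		if time.Since(start) < time.Second {
			return errors.New("machine has no IP yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}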
	I0408 12:47:20.534154  433674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:20.538991  433674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:20.539034  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:47:22.249270  433674 crio.go:462] duration metric: took 1.715182486s to copy over tarball
	I0408 12:47:22.249391  433674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:24.966695  433674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.717265287s)
	I0408 12:47:24.966730  433674 crio.go:469] duration metric: took 2.717416948s to extract the tarball
	I0408 12:47:24.966740  433674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:25.007656  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:25.063445  433674 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:47:25.063482  433674 cache_images.go:84] Images are preloaded, skipping loading
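Above, the preload check runs `sudo crictl images --output json` and looks for the expected control-plane image; a miss (as at 12:47:20) triggers the tarball copy and extraction, after which the same check reports all images preloaded. A hedged sketch of such a presence check; the JSON field names are assumed from crictl's CRI-style output, not taken from minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the shape of `crictl images --output json` closely enough
// for a presence check (field names are an assumption).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	const want = "registry.k8s.io/kube-apiserver:v1.29.3"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("preloaded image present:", tag)
				return
			}
		}
	}
	fmt.Println("image not found, preload tarball needed:", want)
}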
	I0408 12:47:25.063494  433674 kubeadm.go:928] updating node { 192.168.72.159 8443 v1.29.3 crio true true} ...
	I0408 12:47:25.063627  433674 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-488947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:25.063745  433674 ssh_runner.go:195] Run: crio config
	I0408 12:47:25.122219  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:25.122282  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:25.122298  433674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:25.122330  433674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488947 NodeName:embed-certs-488947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:25.122556  433674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-488947"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:25.122633  433674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:47:25.137001  433674 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:25.137148  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:25.151168  433674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0408 12:47:25.171698  433674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:25.195101  433674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0408 12:47:25.216873  433674 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:25.221155  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:25.235740  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:25.354135  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:25.377763  433674 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947 for IP: 192.168.72.159
	I0408 12:47:25.377801  433674 certs.go:194] generating shared ca certs ...
	I0408 12:47:25.377824  433674 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:25.378055  433674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:25.378137  433674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:25.378161  433674 certs.go:256] generating profile certs ...
	I0408 12:47:25.378299  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/client.key
	I0408 12:47:25.378391  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key.21d2a89c
	I0408 12:47:25.378460  433674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key
	I0408 12:47:25.378628  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:25.378687  433674 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:25.378702  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:25.378736  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:25.378780  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:25.378818  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:25.378888  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:25.379800  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:25.422370  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:25.468967  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:25.516750  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:20.956916  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.456948  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.957498  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.982763  433557 api_server.go:72] duration metric: took 1.526450888s to wait for apiserver process to appear ...
	I0408 12:47:21.982797  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:21.982852  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.363696  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.363732  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.363758  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.398003  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.398065  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.483280  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
	I0408 12:47:26.461241  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.461305  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.461321  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.482518  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.482566  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.483554  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.497035  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.497075  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.983270  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.996515  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.996556  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.483125  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.491506  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.491549  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.983839  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.991044  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.991090  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.483669  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.490665  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:28.490703  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.983248  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.998278  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:47:29.007388  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:47:29.007429  433557 api_server.go:131] duration metric: took 7.024624495s to wait for apiserver health ...
	I0408 12:47:29.007444  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:29.007452  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:29.009506  433557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
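
The 403 -> 500 -> 200 progression above is the normal sequence for an apiserver that has just been restarted: anonymous requests to /healthz are rejected with 403 until the bootstrap RBAC policy that allows unauthenticated health checks is in place, then individual poststarthooks such as rbac/bootstrap-roles, bootstrap-controller and apiservice-discovery-controller flip from [-] to [+] until the endpoint finally returns 200. A minimal sketch of this kind of health polling, assuming only the endpoint and the roughly 500ms retry interval taken from the log (illustrative Go, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The test cluster uses a self-signed CA, so this sketch skips TLS verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.61.48:8443/healthz" // endpoint from the log above
	for {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d\n", url, resp.StatusCode)
		if resp.StatusCode == http.StatusOK {
			return // healthz is "ok", the control plane is serving
		}
		fmt.Println(string(body)) // prints the per-hook [+]/[-] breakdown seen above
		time.Sleep(500 * time.Millisecond)
	}
}
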
	I0408 12:47:25.561601  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 12:47:26.087896  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:26.116559  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:26.145651  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:26.174910  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:26.206627  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:26.238398  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:26.281684  433674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:26.306417  433674 ssh_runner.go:195] Run: openssl version
	I0408 12:47:26.313279  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:26.328106  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333727  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333810  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.340200  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:26.352316  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:26.364788  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.369928  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.370003  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.376525  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:26.388232  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:26.400301  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405327  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405407  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.411586  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:26.423764  433674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:26.428995  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:26.435932  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:26.442742  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:26.451458  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:26.458715  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:26.466424  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
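
The certificate handling above has two parts. First, each CA bundle copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs under <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem), which is the lookup scheme OpenSSL-based clients use to find trusted CAs. Second, every serving and client certificate is checked with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours and would trigger regeneration. A rough sketch of the hash-and-link step, assuming the paths shown in the log (illustrative code, not minikube's certs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	// openssl prints the subject hash used for the /etc/ssl/certs/<hash>.0 symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Mirror the "test -L ... || ln -fs ..." command from the log (requires root).
	shell := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
	if err := exec.Command("sudo", "/bin/bash", "-c", shell).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", pem, "->", link)
}
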
	I0408 12:47:26.473948  433674 kubeadm.go:391] StartCluster: {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:26.474083  433674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:26.474158  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.515603  433674 cri.go:89] found id: ""
	I0408 12:47:26.515676  433674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:26.526818  433674 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:26.526845  433674 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:26.526851  433674 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:26.526908  433674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:26.537675  433674 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:26.538807  433674 kubeconfig.go:125] found "embed-certs-488947" server: "https://192.168.72.159:8443"
	I0408 12:47:26.540848  433674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:26.551278  433674 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.159
	I0408 12:47:26.551317  433674 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:26.551330  433674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:26.551406  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.591372  433674 cri.go:89] found id: ""
	I0408 12:47:26.591478  433674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:26.610486  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:26.621770  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:26.621794  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:26.621869  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:26.632480  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:26.632554  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:26.645878  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:26.659969  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:26.660068  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:26.670611  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.680945  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:26.681034  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.692201  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:26.703049  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:26.703126  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
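
Before reconfiguring, the restart path greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it (here none of the files exist yet, so every grep exits with status 2 and the rm -f calls are no-ops); kubeadm then regenerates them in the phases below. A small sketch of that cleanup logic, using the file list and endpoint from the log (illustrative, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm can rewrite it.
			os.Remove(f) // ignore the error, like rm -f in the log
			fmt.Println("removed stale", f)
			continue
		}
		fmt.Println("kept", f)
	}
}
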
	I0408 12:47:26.715887  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:26.727464  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:26.956245  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.722655  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.973294  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.086774  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
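
Because existing configuration was detected, the control plane is rebuilt in place by running individual kubeadm init phases rather than a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane and etcd are regenerated from /var/tmp/minikube/kubeadm.yaml before the apiserver is polled for health again. A compact sketch of that phase sequence, assuming the binary and config paths shown in the log (illustrative Go, not minikube's restart code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.3/kubeadm" // versioned path from the log
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},         // regenerate any missing certificates
		{"kubeconfig", "all"},    // admin/kubelet/controller-manager/scheduler kubeconfigs
		{"kubelet-start"},        // write kubelet config and restart the kubelet
		{"control-plane", "all"}, // static pod manifests for apiserver, controller-manager, scheduler
		{"etcd", "local"},        // static pod manifest for the local etcd member
	}
	for _, phase := range phases {
		args := append([]string{kubeadm, "init", "phase"}, phase...)
		args = append(args, "--config", config)
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("kubeadm init phase %v\n%s", phase, out)
		if err != nil {
			panic(err)
		}
	}
}
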
	I0408 12:47:28.203640  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:28.203755  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:28.704550  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.203852  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.704305  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.724333  433674 api_server.go:72] duration metric: took 1.520681062s to wait for apiserver process to appear ...
	I0408 12:47:29.724372  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:29.724402  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:29.010843  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:29.029631  433557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:29.052609  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:29.069954  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:29.070010  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:29.070022  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:29.070034  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:29.070043  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:29.070049  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:47:29.070076  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:29.070087  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:29.070098  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:47:29.070107  433557 system_pods.go:74] duration metric: took 17.469317ms to wait for pod list to return data ...
	I0408 12:47:29.070117  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:29.075401  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:29.075443  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:29.075459  433557 node_conditions.go:105] duration metric: took 5.335891ms to run NodePressure ...
	I0408 12:47:29.075489  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:29.403218  433557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409235  433557 kubeadm.go:733] kubelet initialised
	I0408 12:47:29.409263  433557 kubeadm.go:734] duration metric: took 6.014758ms waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409276  433557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:29.418787  433557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.441264  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441310  433557 pod_ready.go:81] duration metric: took 22.478832ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.441325  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441336  433557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.461805  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461916  433557 pod_ready.go:81] duration metric: took 20.564997ms for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.461945  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461982  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.475160  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475198  433557 pod_ready.go:81] duration metric: took 13.191566ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.475229  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475241  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.486266  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486306  433557 pod_ready.go:81] duration metric: took 11.046794ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.486321  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486331  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.857658  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857703  433557 pod_ready.go:81] duration metric: took 371.357848ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.857717  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857725  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.258154  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258194  433557 pod_ready.go:81] duration metric: took 400.459219ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.258208  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258230  433557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.656845  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656890  433557 pod_ready.go:81] duration metric: took 398.64565ms for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.656904  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656915  433557 pod_ready.go:38] duration metric: took 1.247627349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
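
The pod_ready wait above finishes in about 1.2s instead of the allowed 4m0s because every system-critical pod is hosted on a node that still reports Ready=False right after the restart; each check is logged as "(skipping!)", recorded as an error, and the loop moves on to the next pod. A minimal client-go sketch of the underlying condition checks, with the kubeconfig path as a placeholder and one pod name taken from the log (illustrative, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-135234", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			// Matches the log's skip path when the node has status "Ready":"False".
			fmt.Println("node not Ready, skipping pod wait")
			return
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Println("pod Ready condition:", c.Status)
		}
	}
}
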
	I0408 12:47:30.656947  433557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:47:30.683024  433557 ops.go:34] apiserver oom_adj: -16
	I0408 12:47:30.683055  433557 kubeadm.go:591] duration metric: took 12.247545723s to restartPrimaryControlPlane
	I0408 12:47:30.683067  433557 kubeadm.go:393] duration metric: took 12.302946s to StartCluster
	I0408 12:47:30.683095  433557 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.683214  433557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:30.685507  433557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.685852  433557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:47:30.687967  433557 out.go:177] * Verifying Kubernetes components...
	I0408 12:47:30.685951  433557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:47:30.686122  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:47:30.689462  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:30.689475  433557 addons.go:69] Setting storage-provisioner=true in profile "no-preload-135234"
	I0408 12:47:30.689511  433557 addons.go:234] Setting addon storage-provisioner=true in "no-preload-135234"
	W0408 12:47:30.689521  433557 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:47:30.689555  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.689573  433557 addons.go:69] Setting default-storageclass=true in profile "no-preload-135234"
	I0408 12:47:30.689620  433557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-135234"
	I0408 12:47:30.689956  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.689995  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.689996  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690026  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.690085  433557 addons.go:69] Setting metrics-server=true in profile "no-preload-135234"
	I0408 12:47:30.690135  433557 addons.go:234] Setting addon metrics-server=true in "no-preload-135234"
	W0408 12:47:30.690146  433557 addons.go:243] addon metrics-server should already be in state true
	I0408 12:47:30.690186  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.690614  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690692  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.710746  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0408 12:47:30.710947  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0408 12:47:30.711153  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0408 12:47:30.711301  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711752  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711839  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.712010  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712027  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712564  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.712757  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712780  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712911  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712926  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.713381  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.713427  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.713660  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714094  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714304  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.714365  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.714401  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.717892  433557 addons.go:234] Setting addon default-storageclass=true in "no-preload-135234"
	W0408 12:47:30.717959  433557 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:47:30.718004  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.718497  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.718577  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.734825  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0408 12:47:30.736890  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0408 12:47:30.756599  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.756681  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.757290  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757312  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757318  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757332  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757774  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.757849  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.758015  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.758082  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.760658  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.760732  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.762999  433557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:47:30.764689  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:47:30.764714  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:47:30.766392  433557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:30.764741  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.767890  433557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:30.767911  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:47:30.767933  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.772580  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.772714  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773015  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773038  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773423  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773449  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773462  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773663  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773875  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.773897  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.774038  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774074  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774163  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.774227  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.779694  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0408 12:47:30.780190  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.780772  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.780793  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.781114  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.781773  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.781821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.803661  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0408 12:47:30.804212  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.804828  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.804847  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.805397  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.805713  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.807761  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.808244  433557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:30.808269  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:47:30.808288  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.811598  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812078  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.812109  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812264  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.812465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.812702  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.812868  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:30.979880  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:31.004961  433557 node_ready.go:35] waiting up to 6m0s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:31.088114  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:31.110971  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:47:31.111017  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:47:31.150193  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:47:31.150229  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:47:31.184811  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.184899  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:47:31.214364  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.244802  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:32.406228  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.318067686s)
	I0408 12:47:32.406305  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406317  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.406830  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.406897  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.406913  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406921  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.407242  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407275  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407319  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.407329  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.532524  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318098791s)
	I0408 12:47:32.532662  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532694  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.532576  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287674494s)
	I0408 12:47:32.532774  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532799  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533022  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533041  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533052  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533060  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533223  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533280  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533286  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533294  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533301  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533457  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533516  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533539  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533546  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.534974  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.534991  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.535019  433557 addons.go:470] Verifying addon metrics-server=true in "no-preload-135234"
	I0408 12:47:32.543151  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.543183  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.543549  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.543571  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.546033  433557 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0408 12:47:32.894282  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:32.894320  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:32.894336  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:32.988397  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:32.988442  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.224790  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.232146  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.232176  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.724683  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.729479  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.729520  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:34.224919  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:34.230233  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:47:34.247835  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:47:34.247872  433674 api_server.go:131] duration metric: took 4.523492127s to wait for apiserver health ...
	I0408 12:47:34.247883  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:34.247890  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:34.249807  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:34.251603  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:34.265254  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:34.288078  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:34.301533  433674 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:34.301570  433674 system_pods.go:61] "coredns-76f75df574-hq2mm" [cfc7bd40-0b7d-4e00-ac55-b3ae796018ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:34.301577  433674 system_pods.go:61] "etcd-embed-certs-488947" [eb29ace5-8ad9-4080-a875-2eb83dcea583] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:34.301585  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [8e97033f-996a-4b64-9474-7b4d562eb1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:34.301591  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [b3db7631-d953-418e-9c72-f299d0287a2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:34.301595  433674 system_pods.go:61] "kube-proxy-2gn8m" [c31d8f0d-d6c1-4afa-b64c-7fc422d493f2] Running
	I0408 12:47:34.301600  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b9b29f85-7a75-4b09-b6cd-940ff42326d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:34.301604  433674 system_pods.go:61] "metrics-server-57f55c9bc5-z2ztl" [d9dc47ad-3370-4e55-a724-8c529c723992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:34.301607  433674 system_pods.go:61] "storage-provisioner" [4953dc3a-31ca-464d-9530-34f488ed9a02] Running
	I0408 12:47:34.301617  433674 system_pods.go:74] duration metric: took 13.514139ms to wait for pod list to return data ...
	I0408 12:47:34.301624  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:34.305931  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:34.305962  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:34.305974  433674 node_conditions.go:105] duration metric: took 4.345624ms to run NodePressure ...
	I0408 12:47:34.305993  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:34.598392  433674 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603606  433674 kubeadm.go:733] kubelet initialised
	I0408 12:47:34.603632  433674 kubeadm.go:734] duration metric: took 5.204237ms waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603641  433674 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:34.610027  433674 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:32.547718  433557 addons.go:505] duration metric: took 1.861769291s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0408 12:47:33.008857  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:35.510251  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:39.502172  433439 start.go:364] duration metric: took 55.254308447s to acquireMachinesLock for "default-k8s-diff-port-527454"
	I0408 12:47:39.502232  433439 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:39.502245  433439 fix.go:54] fixHost starting: 
	I0408 12:47:39.502725  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:39.502767  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:39.523738  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0408 12:47:39.525022  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:39.525614  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:47:39.525646  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:39.526077  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:39.526307  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:47:39.526448  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:47:39.528207  433439 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527454: state=Stopped err=<nil>
	I0408 12:47:39.528241  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	W0408 12:47:39.528449  433439 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:39.530360  433439 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-527454" ...
	I0408 12:47:36.618430  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.619713  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.009213  433557 node_ready.go:49] node "no-preload-135234" has status "Ready":"True"
	I0408 12:47:38.009241  433557 node_ready.go:38] duration metric: took 7.004239102s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:38.009250  433557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:38.014665  433557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020024  433557 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:38.020054  433557 pod_ready.go:81] duration metric: took 5.358174ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020067  433557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:40.030803  433557 pod_ready.go:102] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
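The date +%!s(MISSING).%!N(MISSING) line above is the guest-clock probe with its printf verbs mangled by the logger; the command presumably sent over SSH is the one below (an illustrative reconstruction, not copied from the node). Its output is compared with the host-side timestamp taken around the SSH round-trip, which is where the delta=89.700289ms figure and the tolerance check come from.

    # presumed, un-mangled form of the command logged above
    date +%s.%N
    # -> 1712580459.470646239   (guest time in epoch seconds.nanoseconds, as echoed in the SSH output)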
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
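For reference, the find invocation above (its -printf verb was logged as %!p(MISSING)) presumably takes roughly the form below; it moves any bridge/podman CNI configs aside so only minikube's own CNI configuration stays active, which is why 87-podman-bridge.conflist is reported as disabled. This is an illustrative reconstruction, not a command copied from the log:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;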
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
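The three sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned for this Kubernetes version, and CRI-O is switched to the same cgroupfs driver the kubelet will use (cgroupDriver: cgroupfs in the kubelet config further down). A quick way to confirm the result, sketched here for illustration rather than taken from this log:

    # expected key/value pairs after the sed edits above have applied
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"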
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
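The failed sysctl above is expected while the br_netfilter module is not loaded, since /proc/sys/net/bridge/ only exists once it is; that is why the modprobe and the ip_forward write follow. A manual check of the same prerequisites would look roughly like this (illustrative, not taken from the log):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # present (and normally 1) once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above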
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:39.531815  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Start
	I0408 12:47:39.531988  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring networks are active...
	I0408 12:47:39.532969  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network default is active
	I0408 12:47:39.533486  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network mk-default-k8s-diff-port-527454 is active
	I0408 12:47:39.533947  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Getting domain xml...
	I0408 12:47:39.534767  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Creating domain...
	I0408 12:47:40.935150  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting to get IP...
	I0408 12:47:40.936250  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:40.936778  435248 retry.go:31] will retry after 215.442539ms: waiting for machine to come up
	I0408 12:47:41.154393  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154940  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.154852  435248 retry.go:31] will retry after 274.982374ms: waiting for machine to come up
	I0408 12:47:41.431442  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.431990  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.432023  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.431933  435248 retry.go:31] will retry after 335.077282ms: waiting for machine to come up
	I0408 12:47:40.620537  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:42.622241  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:44.118493  433674 pod_ready.go:92] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.118532  433674 pod_ready.go:81] duration metric: took 9.508474788s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.118545  433674 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626843  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.626869  433674 pod_ready.go:81] duration metric: took 508.318376ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626882  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633488  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.633521  433674 pod_ready.go:81] duration metric: took 6.630145ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633535  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027744  433557 pod_ready.go:92] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.027771  433557 pod_ready.go:81] duration metric: took 3.007695895s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027782  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034038  433557 pod_ready.go:92] pod "kube-apiserver-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.034076  433557 pod_ready.go:81] duration metric: took 6.28617ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034090  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039232  433557 pod_ready.go:92] pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.039262  433557 pod_ready.go:81] duration metric: took 5.161613ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039277  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045793  433557 pod_ready.go:92] pod "kube-proxy-tr6td" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.045887  433557 pod_ready.go:81] duration metric: took 6.600896ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045908  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.209976  433557 pod_ready.go:92] pod "kube-scheduler-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.210003  433557 pod_ready.go:81] duration metric: took 164.085848ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.210018  433557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:43.220338  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:45.718170  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:41.768734  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769194  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.769131  435248 retry.go:31] will retry after 581.590127ms: waiting for machine to come up
	I0408 12:47:42.352156  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.352975  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.353017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:42.352850  435248 retry.go:31] will retry after 673.545679ms: waiting for machine to come up
	I0408 12:47:43.028329  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029066  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029101  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.028956  435248 retry.go:31] will retry after 690.795418ms: waiting for machine to come up
	I0408 12:47:43.721435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.721999  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.722025  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.721948  435248 retry.go:31] will retry after 941.917321ms: waiting for machine to come up
	I0408 12:47:44.665002  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665468  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665495  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:44.665406  435248 retry.go:31] will retry after 1.037587737s: waiting for machine to come up
	I0408 12:47:45.705319  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705792  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705822  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:45.705730  435248 retry.go:31] will retry after 1.287151334s: waiting for machine to come up
	I0408 12:47:46.640995  433674 pod_ready.go:102] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:48.558627  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.558666  433674 pod_ready.go:81] duration metric: took 3.925119514s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.558683  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583378  433674 pod_ready.go:92] pod "kube-proxy-2gn8m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.583405  433674 pod_ready.go:81] duration metric: took 24.715384ms for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583416  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598937  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.598969  433674 pod_ready.go:81] duration metric: took 15.544342ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598983  433674 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:47.918307  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:50.219513  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
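The openssl x509 -hash runs above compute the subject hash OpenSSL uses to look up CA certificates in /etc/ssl/certs, which is where the symlink names 3ec20f2e.0, b5213941.0 and 51391683.0 come from. Sketched for one of the certificates above (illustrative, not copied from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem)
    sudo ln -fs /etc/ssl/certs/375817.pem "/etc/ssl/certs/${h}.0"   # h is 51391683 here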
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:46.994162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994640  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994672  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:46.994593  435248 retry.go:31] will retry after 1.863771905s: waiting for machine to come up
	I0408 12:47:48.860673  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861257  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:48.861151  435248 retry.go:31] will retry after 2.204894376s: waiting for machine to come up
	I0408 12:47:51.067423  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067909  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067937  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:51.067864  435248 retry.go:31] will retry after 2.625423179s: waiting for machine to come up
	I0408 12:47:50.608007  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:53.108084  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:52.717545  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:55.218944  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.695295  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695826  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695862  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:53.695772  435248 retry.go:31] will retry after 4.111917473s: waiting for machine to come up
	I0408 12:47:55.606909  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:58.111708  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:57.717559  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:59.718066  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.809179  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809697  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809729  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:57.809632  435248 retry.go:31] will retry after 4.27502806s: waiting for machine to come up
	I0408 12:48:02.086033  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086558  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has current primary IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086586  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Found IP for machine: 192.168.50.7
	I0408 12:48:02.086603  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserving static IP address...
	I0408 12:48:02.087069  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.087105  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserved static IP address: 192.168.50.7
	I0408 12:48:02.087137  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | skip adding static IP to network mk-default-k8s-diff-port-527454 - found existing host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"}
	I0408 12:48:02.087158  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Getting to WaitForSSH function...
	I0408 12:48:02.087177  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for SSH to be available...
	I0408 12:48:02.089228  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089585  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.089608  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH client type: external
	I0408 12:48:02.089840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa (-rw-------)
	I0408 12:48:02.089885  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:48:02.089900  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | About to run SSH command:
	I0408 12:48:02.089917  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | exit 0
	I0408 12:48:02.216245  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | SSH cmd err, output: <nil>: 
	I0408 12:48:02.216684  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetConfigRaw
	I0408 12:48:02.217582  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.220543  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.220961  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.220995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.221282  433439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:48:02.221480  433439 machine.go:94] provisionDockerMachine start ...
	I0408 12:48:02.221499  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:02.221738  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.224371  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.224770  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.224802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.225007  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.225236  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225399  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225548  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.225740  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.225957  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.225970  433439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:48:02.336716  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:48:02.336754  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337074  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:48:02.337108  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337351  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.340133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340539  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.340583  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340653  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.340842  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341016  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341171  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.341346  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.341539  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.341556  433439 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-527454 && echo "default-k8s-diff-port-527454" | sudo tee /etc/hostname
	I0408 12:48:02.464462  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-527454
	
	I0408 12:48:02.464507  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.467682  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468082  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.468118  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468335  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.468595  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468782  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468954  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.469154  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.469372  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.469392  433439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-527454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-527454/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-527454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:48:02.593971  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:48:02.594006  433439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:48:02.594061  433439 buildroot.go:174] setting up certificates
	I0408 12:48:02.594078  433439 provision.go:84] configureAuth start
	I0408 12:48:02.594092  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.594431  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.597587  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.598043  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.600898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601267  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.601299  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601497  433439 provision.go:143] copyHostCerts
	I0408 12:48:02.601562  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:48:02.601588  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:48:02.601653  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:48:02.601841  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:48:02.601857  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:48:02.601888  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:48:02.601966  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:48:02.601981  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:48:02.602010  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:48:02.602088  433439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-527454 san=[127.0.0.1 192.168.50.7 default-k8s-diff-port-527454 localhost minikube]
	I0408 12:48:02.845116  433439 provision.go:177] copyRemoteCerts
	I0408 12:48:02.845190  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:48:02.845217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.848054  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.848406  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848559  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.848817  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.848986  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.849125  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:02.934223  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:48:02.962726  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0408 12:48:02.992767  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:48:03.021973  433439 provision.go:87] duration metric: took 427.87874ms to configureAuth
	I0408 12:48:03.022009  433439 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:48:03.022270  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:48:03.022382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.025407  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025765  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.025802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025959  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.026215  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026379  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026510  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.026659  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.026834  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.026856  433439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:48:03.310263  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:48:03.310307  433439 machine.go:97] duration metric: took 1.088813603s to provisionDockerMachine
	I0408 12:48:03.310323  433439 start.go:293] postStartSetup for "default-k8s-diff-port-527454" (driver="kvm2")
	I0408 12:48:03.310337  433439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:48:03.310362  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.310758  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:48:03.310799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.313533  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.313968  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.314001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.314201  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.314375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.314584  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.314760  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.400087  433439 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:48:03.405240  433439 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:48:03.405272  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:48:03.405351  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:48:03.405450  433439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:48:03.405570  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:48:03.415947  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:03.448935  433439 start.go:296] duration metric: took 138.593583ms for postStartSetup
	I0408 12:48:03.449025  433439 fix.go:56] duration metric: took 23.946779964s for fixHost
	I0408 12:48:03.449055  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.452026  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452392  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.452435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452630  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.452844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453063  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453248  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.453420  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.453604  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.453615  433439 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:48:03.565710  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580483.551031252
	
	I0408 12:48:03.565738  433439 fix.go:216] guest clock: 1712580483.551031252
	I0408 12:48:03.565750  433439 fix.go:229] Guest: 2024-04-08 12:48:03.551031252 +0000 UTC Remote: 2024-04-08 12:48:03.44903588 +0000 UTC m=+361.760256784 (delta=101.995372ms)
	I0408 12:48:03.565777  433439 fix.go:200] guest clock delta is within tolerance: 101.995372ms
	I0408 12:48:03.565787  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 24.063582343s
	I0408 12:48:03.565806  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.566106  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:03.569409  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.569776  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.569814  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.570017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570577  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570831  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570952  433439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:48:03.571021  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.571121  433439 ssh_runner.go:195] Run: cat /version.json
	I0408 12:48:03.571146  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.573939  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574167  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574300  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574333  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574469  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574594  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574621  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574674  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.574757  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574871  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.574957  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.575130  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.575441  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.575590  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.695930  433439 ssh_runner.go:195] Run: systemctl --version
	I0408 12:48:03.702915  433439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:48:03.853737  433439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:48:03.860218  433439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:48:03.860287  433439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:48:03.877827  433439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:48:03.877861  433439 start.go:494] detecting cgroup driver to use...
	I0408 12:48:03.877943  433439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:48:03.897232  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:48:03.913028  433439 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:48:03.913112  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:48:03.929574  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:48:03.946880  433439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:48:04.083524  433439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:48:04.243842  433439 docker.go:233] disabling docker service ...
	I0408 12:48:04.243938  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:48:04.260459  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:48:04.276119  433439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:48:04.428999  433439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:48:04.571431  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:48:04.589661  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:48:04.612872  433439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:48:04.612954  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.625841  433439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:48:04.625939  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.638868  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.652106  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.664883  433439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:48:04.678149  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.691069  433439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.711329  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.725917  433439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:48:04.738875  433439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:48:04.738941  433439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:48:04.756784  433439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:48:04.769852  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:04.895658  433439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:48:05.056165  433439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:48:05.056270  433439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:48:05.061838  433439 start.go:562] Will wait 60s for crictl version
	I0408 12:48:05.061918  433439 ssh_runner.go:195] Run: which crictl
	I0408 12:48:05.066280  433439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:48:05.110966  433439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:48:05.111084  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.142272  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.176138  433439 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:48:00.606508  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:03.107018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:05.109926  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:02.220836  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:04.718465  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.177382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:05.180028  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180334  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:05.180363  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180635  433439 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:48:05.185436  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:05.199001  433439 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:48:05.199130  433439 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:48:05.199174  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:05.239255  433439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:48:05.239358  433439 ssh_runner.go:195] Run: which lz4
	I0408 12:48:05.244115  433439 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:48:05.249135  433439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:48:05.249169  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:48:07.606284  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.607161  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.720025  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.219059  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.889921  433439 crio.go:462] duration metric: took 1.645848876s to copy over tarball
	I0408 12:48:06.890006  433439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:48:09.403589  433439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513555281s)
	I0408 12:48:09.403620  433439 crio.go:469] duration metric: took 2.513669951s to extract the tarball
	I0408 12:48:09.403627  433439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:48:09.446487  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:09.494576  433439 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:48:09.494606  433439 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:48:09.494614  433439 kubeadm.go:928] updating node { 192.168.50.7 8444 v1.29.3 crio true true} ...
	I0408 12:48:09.494822  433439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-527454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
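(Editor's note: the unit text above is the systemd drop-in generated for the kubelet. The empty ExecStart= line is the standard systemd way to clear the ExecStart inherited from the base unit before replacing it. A rough sketch of rendering such a drop-in with text/template follows; the struct, field values and the idea of printing to stdout are illustrative only, the real file is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as the scp lines below show.)

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is an illustrative stand-in for the values seen in the log.
type kubeletOpts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.29.3/kubelet",
		NodeName:    "default-k8s-diff-port-527454",
		NodeIP:      "192.168.50.7",
	})
}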
	I0408 12:48:09.494917  433439 ssh_runner.go:195] Run: crio config
	I0408 12:48:09.541809  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:09.541839  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:09.541859  433439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:48:09.541887  433439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-527454 NodeName:default-k8s-diff-port-527454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:48:09.542105  433439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-527454"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:48:09.542201  433439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:48:09.553494  433439 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:48:09.553591  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:48:09.564970  433439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0408 12:48:09.584888  433439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:48:09.604538  433439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0408 12:48:09.623993  433439 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0408 12:48:09.628368  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
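(Editor's note: the bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping via a temp file. Roughly the same operation in Go is sketched below; the write-back is replaced by printing, since modifying /etc/hosts on the guest needs root.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.50.7\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for the control-plane alias.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	updated := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	// Print instead of writing back; the log shows minikube using a temp file plus sudo cp.
	fmt.Print(updated)
}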
	I0408 12:48:09.642170  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:09.789791  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:48:09.808943  433439 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454 for IP: 192.168.50.7
	I0408 12:48:09.808972  433439 certs.go:194] generating shared ca certs ...
	I0408 12:48:09.808995  433439 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:48:09.809194  433439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:48:09.809242  433439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:48:09.809253  433439 certs.go:256] generating profile certs ...
	I0408 12:48:09.809344  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/client.key
	I0408 12:48:09.809415  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key.ad1d04eb
	I0408 12:48:09.809457  433439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key
	I0408 12:48:09.809645  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:48:09.809699  433439 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:48:09.809713  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:48:09.809742  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:48:09.809764  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:48:09.809792  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:48:09.809851  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:09.810516  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:48:09.866085  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:48:09.899718  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:48:09.941704  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:48:09.976180  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 12:48:10.014420  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:48:10.044380  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:48:10.072034  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:48:10.099417  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:48:10.126143  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:48:10.154244  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:48:10.183954  433439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:48:10.207277  433439 ssh_runner.go:195] Run: openssl version
	I0408 12:48:10.213691  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:48:10.228406  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233736  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233798  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.240236  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:48:10.253382  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:48:10.267783  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273234  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273318  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.279925  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:48:10.292710  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:48:10.305381  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310629  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310703  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.317063  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
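(Editor's note: the test -L / ln -fs pairs above maintain the usual OpenSSL trust-store layout, where each CA file gets a symlink named <subject-hash>.0 under /etc/ssl/certs and the hash comes from openssl x509 -hash -noout. A small sketch that derives the link name for one PEM, shelling out to openssl rather than reimplementing the subject hash; the PEM path is just one of the files from the log.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	// The trust store expects /etc/ssl/certs/<hash>.0 pointing at the PEM file.
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
}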
	I0408 12:48:10.330320  433439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:48:10.336138  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:48:10.343341  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:48:10.350536  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:48:10.357665  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:48:10.364925  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:48:10.372314  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
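(Editor's note: the -checkend 86400 invocations above only verify that each certificate remains valid for at least another 24 hours. The same check in pure Go with crypto/x509 is sketched here; the path is one of the certificates named in the log, but any PEM certificate works.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -noout -checkend 86400`.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}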
	I0408 12:48:10.380001  433439 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:48:10.380107  433439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:48:10.380174  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.425378  433439 cri.go:89] found id: ""
	I0408 12:48:10.425475  433439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:48:10.438972  433439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:48:10.439000  433439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:48:10.439005  433439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:48:10.439051  433439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:48:10.452072  433439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:48:10.453410  433439 kubeconfig.go:125] found "default-k8s-diff-port-527454" server: "https://192.168.50.7:8444"
	I0408 12:48:10.456022  433439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:48:10.469116  433439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0408 12:48:10.469171  433439 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:48:10.469188  433439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:48:10.469256  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.517874  433439 cri.go:89] found id: ""
	I0408 12:48:10.517969  433439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:48:10.538088  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:48:10.551560  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:48:10.551580  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:48:10.551636  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:48:10.564123  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:48:10.564209  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:48:10.578691  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:48:10.590692  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:48:10.590765  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:48:10.602902  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.616831  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:48:10.616922  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.629213  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:48:10.641625  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:48:10.641709  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:48:10.653162  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:48:10.665261  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:10.811712  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.107002  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.606976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:12.188805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.221750  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.901885  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09013292s)
	I0408 12:48:11.975836  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.237051  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.329550  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
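(Editor's note: the restart path above does not run a full kubeadm init; it replays individual phases, certs, kubeconfig, kubelet-start, control-plane and etcd, against the generated config. A compressed sketch of that sequence follows, assuming kubeadm and the config file sit where the log places them; the binary path and config path are copied from the log, the loop itself is illustrative.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Mirrors: sudo env PATH=... kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.29.3:$PATH\" kubeadm init phase " +
			phase + " --config /var/tmp/minikube/kubeadm.yaml"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== kubeadm init phase %s ==\n%s", phase, out)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}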
	I0408 12:48:12.460345  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:48:12.460457  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.961443  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.460681  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.520828  433439 api_server.go:72] duration metric: took 1.060470201s to wait for apiserver process to appear ...
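(Editor's note: the first stage of this wait, and the endlessly repeating `sudo pgrep -xnf kube-apiserver.*minikube.*` lines from process 433881 elsewhere in this log, simply poll until a kube-apiserver process exists. A minimal polling sketch; the pattern and the roughly half-second cadence come from the log, the two-minute timeout is an illustrative choice.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}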
	I0408 12:48:13.520866  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:48:13.520899  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:13.521407  433439 api_server.go:269] stopped: https://192.168.50.7:8444/healthz: Get "https://192.168.50.7:8444/healthz": dial tcp 192.168.50.7:8444: connect: connection refused
	I0408 12:48:14.022007  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.564485  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.564526  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:16.564543  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.617870  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.617904  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:17.021290  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.026545  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.026578  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:17.521124  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.529552  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.529596  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:18.021125  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:18.037000  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:48:18.049656  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:48:18.049699  433439 api_server.go:131] duration metric: took 4.528823991s to wait for apiserver health ...
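(Editor's note: the 403 then 500 then 200 progression above is normal while the apiserver restarts: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, and individual post-start hooks report failed until they finish. A rough polling sketch against the same endpoint; the address comes from the log, and TLS verification is skipped here purely for illustration since the server cert is signed by minikubeCA.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.7:8444/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}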
	I0408 12:48:18.049722  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:18.049730  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:18.051495  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:48:16.607222  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:18.607837  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.717612  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:19.217050  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.052916  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:48:18.072115  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:48:18.111408  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:48:18.130585  433439 system_pods.go:59] 8 kube-system pods found
	I0408 12:48:18.130629  433439 system_pods.go:61] "coredns-76f75df574-r99kj" [171e271b-eec6-4238-afb1-82a2f228c225] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:48:18.130641  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [7019f1eb-58ef-4b1f-acf3-ed3c1ed84623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:48:18.130651  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [80ccd16d-d883-4c92-bb13-abe2d412532c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:48:18.130661  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [78d513aa-1f24-42c0-bfb9-4c20fdee63f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:48:18.130669  433439 system_pods.go:61] "kube-proxy-ztmmc" [de09a26e-cd95-401a-b575-977fcd660c47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:48:18.130683  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [eac4d549-1763-45b8-be11-b3b9e83f5110] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:48:18.130702  433439 system_pods.go:61] "metrics-server-57f55c9bc5-44qbm" [52631fc6-84d0-443b-ba42-de35a65db0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:48:18.130714  433439 system_pods.go:61] "storage-provisioner" [82e8b0d0-6c22-4644-8bd1-b48887b0fe82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:48:18.130730  433439 system_pods.go:74] duration metric: took 19.293309ms to wait for pod list to return data ...
	I0408 12:48:18.130745  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:48:18.135625  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:48:18.135663  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:48:18.135679  433439 node_conditions.go:105] duration metric: took 4.924641ms to run NodePressure ...
	I0408 12:48:18.135724  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:18.416272  433439 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424302  433439 kubeadm.go:733] kubelet initialised
	I0408 12:48:18.424325  433439 kubeadm.go:734] duration metric: took 8.015642ms waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424342  433439 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:48:18.436706  433439 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.447063  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447102  433439 pod_ready.go:81] duration metric: took 10.361708ms for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.447116  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447126  433439 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.460464  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460496  433439 pod_ready.go:81] duration metric: took 13.357612ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.460513  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460523  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.469991  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470035  433439 pod_ready.go:81] duration metric: took 9.502493ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.470072  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470083  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.516886  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516920  433439 pod_ready.go:81] duration metric: took 46.823396ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.516933  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516940  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915101  433439 pod_ready.go:92] pod "kube-proxy-ztmmc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:18.915131  433439 pod_ready.go:81] duration metric: took 398.182437ms for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915144  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:20.922456  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.107083  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.108249  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.219995  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.718091  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.922607  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:24.922155  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:24.922185  433439 pod_ready.go:81] duration metric: took 6.007031338s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:24.922200  433439 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
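(Editor's note: pod_ready.go treats a pod as ready only when its PodReady condition is True, and, as the "skipping!" lines above show, it refuses to wait on pods whose node is not yet Ready. A condensed client-go sketch of the per-pod check; the kubeconfig path is a placeholder, the namespace and pod name are taken from the log, and the program needs the k8s.io/client-go module.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-57f55c9bc5-44qbm", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
		}
	}
}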
	I0408 12:48:25.607653  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.216429  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.218553  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.717516  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.931412  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:29.430930  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.608369  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:33.107424  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:32.717551  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.216256  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.931958  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:34.430950  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.607018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.607820  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:40.106361  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.217721  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:39.218016  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.929987  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:38.931900  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.429986  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:42.605609  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:44.606744  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.717196  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:43.718405  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.430411  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:45.932993  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.607058  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.607716  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.216568  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.218325  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.718153  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:48.430488  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.432250  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:51.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.606447  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.217434  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:55.717265  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:52.928902  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.931253  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:56.106505  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.108187  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.218754  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.718600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:57.431032  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:59.930470  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.608450  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.106647  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.218064  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:01.933210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:04.428894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.428974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.606895  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:08.106819  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:07.718023  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:08.429894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.930925  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.609607  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:13.106296  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.716182  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.717779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:12.931911  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.933011  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:15.607174  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:18.106346  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:17.216822  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.216874  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
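The cycle above shows minikube process 433881 probing the CRI runtime for each expected control-plane container by name; an empty result from crictl is what produces the repeated 'No container was found matching "..."' warnings. A minimal Go sketch of that probe, assuming crictl is available locally (minikube actually runs the command on the guest over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>`, which
// prints one container ID per line, or nothing when no container
// (running or exited) matches the given name.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		// len(ids) == 0 corresponds to the "0 containers: []" lines in the log.
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}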
	I0408 12:49:17.429653  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.432135  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:20.609415  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.105576  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:25.106666  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:21.217932  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.720613  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
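The interleaved pod_ready lines come from the other concurrent minikube start processes (433439, 433557, 433674), each polling its metrics-server pod until the PodReady condition turns True. A sketch of that readiness check with client-go; the kubeconfig path and pod name below are placeholders, not values from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 150; i++ {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-xxxxx", metav1.GetOptions{}) // placeholder name
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}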
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:21.929027  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.933879  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.429452  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:27.106798  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:29.605690  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.216525  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.216788  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.217600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
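When none of the named containers exist, the gatherer still collects the generic sources shown above: kubelet and CRI-O logs via journalctl, kernel warnings via dmesg, and overall container status via crictl with a docker fallback. A compact sketch of that collection loop, reusing the shell commands recorded in the log and run locally for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the log lines above; iteration order is not significant.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}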
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.433974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.930607  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.606729  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.107220  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:32.218719  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.718165  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
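Every "describe nodes" attempt in this section fails the same way: with no kube-apiserver container running, nothing is listening on the kubeconfig's server endpoint localhost:8443, so kubectl exits with status 1 and prints the connection-refused message captured above. A minimal sketch that reproduces that step with the exact command from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the log records; on this node it fails because
	// nothing is listening on localhost:8443.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}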
	I0408 12:49:33.429854  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:35.933211  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:36.605433  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:38.606014  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.217818  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:39.717250  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.431584  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:40.930328  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.106567  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:43.606770  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.718244  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.218855  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
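Each cycle for process 433881 begins with a pgrep probe for a running kube-apiserver process before the per-container crictl checks. A small sketch of that liveness probe, run locally for illustration; pgrep exits non-zero when no process matches the pattern:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same pattern as the log: -f matches against the full command line,
	// -x requires an exact match, -n selects the newest matching process.
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}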
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:43.428915  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:45.430859  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.106078  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.108320  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.718065  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.721772  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.432167  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:49.929922  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:50.606575  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.106564  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:51.217123  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.217785  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.217941  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:52.429230  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:54.929643  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.607214  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:58.109657  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:57.717805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.219041  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.430097  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:59.430226  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.605915  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:03.106990  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.717304  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.717757  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:01.930407  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.429884  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:06.430557  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:05.605804  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.606737  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:10.107370  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.218275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:09.716548  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.930029  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.429317  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.107556  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:14.606578  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.717001  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:13.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.719875  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:13.429712  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.430255  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:17.106153  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:19.607656  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.217812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.719543  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:17.930126  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.430210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:22.107118  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.107812  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:23.217266  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:25.218476  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:22.928804  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.930608  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:26.607216  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:28.608092  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.717812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:30.218079  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.429985  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:29.928718  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:31.106901  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.606293  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:32.717659  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.217468  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:31.928982  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.929758  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.930610  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:36.110235  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:38.606389  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.217645  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:39.218104  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:38.429765  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.430575  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.607976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.106175  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:45.107548  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:41.716953  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.717149  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:42.928908  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:44.929425  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:47.606676  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.608190  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.217528  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:48.717623  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:50.729538  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.929626  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.429810  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.106861  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.608143  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:53.217707  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:55.718120  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:51.929891  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.430216  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:57.106572  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.108219  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.216315  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:00.218162  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:56.931254  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.429578  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.606793  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.105581  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:02.218796  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.718232  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:01.929336  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:03.929754  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.429604  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.106159  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.608247  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:07.216779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:09.217275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.929238  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:10.929862  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.107603  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.611978  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.718498  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:12.931867  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:15.430299  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.106552  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.106622  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.108053  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.217595  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.217758  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.220115  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:17.930896  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.430609  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.108741  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.605215  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.716480  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.720217  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:22.929962  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:25.430340  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.605294  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:28.606623  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:27.218727  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.716524  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.430647  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.929105  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:30.607227  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.106814  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.106890  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:31.716747  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.718962  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:31.930145  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.931893  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.932546  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:37.606783  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:39.607738  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.217708  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:38.717067  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.721801  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:38.429490  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.932035  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:42.106879  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:44.607134  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:41.210350  433557 pod_ready.go:81] duration metric: took 4m0.000311819s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:41.210399  433557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 12:51:41.210413  433557 pod_ready.go:38] duration metric: took 4m3.201150727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:41.210464  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:51:41.210520  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:41.210591  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:41.269963  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:41.269999  433557 cri.go:89] found id: ""
	I0408 12:51:41.270010  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:41.270072  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.275411  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:41.275495  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:41.319478  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:41.319517  433557 cri.go:89] found id: ""
	I0408 12:51:41.319529  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:41.319590  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.329956  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:41.330045  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:41.380017  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:41.380049  433557 cri.go:89] found id: ""
	I0408 12:51:41.380061  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:41.380131  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.384973  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:41.385077  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:41.429757  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:41.429786  433557 cri.go:89] found id: ""
	I0408 12:51:41.429798  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:41.429863  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.435404  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:41.435488  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:41.484998  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:41.485031  433557 cri.go:89] found id: ""
	I0408 12:51:41.485042  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:41.485111  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.489802  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:41.489878  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:41.543982  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.544016  433557 cri.go:89] found id: ""
	I0408 12:51:41.544028  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:41.544096  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.548766  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:41.548836  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:41.588398  433557 cri.go:89] found id: ""
	I0408 12:51:41.588425  433557 logs.go:276] 0 containers: []
	W0408 12:51:41.588433  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:41.588439  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:41.588498  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:41.635748  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:41.635771  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:41.635775  433557 cri.go:89] found id: ""
	I0408 12:51:41.635782  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:41.635849  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.641800  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.646173  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:41.646206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.717189  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:41.717228  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:41.779618  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:41.779653  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:41.840050  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:41.840092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:41.855982  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:41.856016  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:42.016416  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:42.016455  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:42.085493  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:42.085538  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:42.132590  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:42.132626  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:42.642069  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:42.642125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:42.708516  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:42.708566  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:42.759072  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:42.759125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:42.810189  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:42.810242  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:42.855931  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:42.855971  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.396658  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.414640  433557 api_server.go:72] duration metric: took 4m14.728700184s to wait for apiserver process to appear ...
	I0408 12:51:45.414671  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:51:45.414714  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.414772  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.460983  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:45.461012  433557 cri.go:89] found id: ""
	I0408 12:51:45.461023  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:45.461102  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.466928  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.467037  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.516723  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:45.516746  433557 cri.go:89] found id: ""
	I0408 12:51:45.516755  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:45.516813  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.521315  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.521413  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.560838  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.560865  433557 cri.go:89] found id: ""
	I0408 12:51:45.560876  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:45.560926  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.565852  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.565937  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.610154  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:45.610175  433557 cri.go:89] found id: ""
	I0408 12:51:45.610183  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:45.610229  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.615014  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.615098  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.658261  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:45.658292  433557 cri.go:89] found id: ""
	I0408 12:51:45.658304  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:45.658367  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.663148  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.663242  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:45.708805  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.708838  433557 cri.go:89] found id: ""
	I0408 12:51:45.708850  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:45.708906  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.713733  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:45.713800  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:45.763432  433557 cri.go:89] found id: ""
	I0408 12:51:45.763465  433557 logs.go:276] 0 containers: []
	W0408 12:51:45.763477  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:45.763486  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:45.763555  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:45.808689  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:45.808711  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.808715  433557 cri.go:89] found id: ""
	I0408 12:51:45.808723  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:45.808782  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.813386  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.818556  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:45.818589  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:43.429577  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:45.431057  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:47.106109  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:48.599265  433674 pod_ready.go:81] duration metric: took 4m0.000260398s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:48.599302  433674 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:51:48.599335  433674 pod_ready.go:38] duration metric: took 4m13.995684279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:48.599373  433674 kubeadm.go:591] duration metric: took 4m22.072516751s to restartPrimaryControlPlane
	W0408 12:51:48.599529  433674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:48.599619  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:45.864458  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:45.864503  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.907964  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:45.908000  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.980082  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:45.980123  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:46.041294  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:46.041330  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:46.102117  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:46.102171  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:46.188553  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:46.188583  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:46.234191  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:46.234229  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:46.281240  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.281273  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.721047  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.721092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.781387  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.781429  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:46.797003  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.797043  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:46.917073  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.917109  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:49.481948  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:51:49.488261  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:51:49.489694  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:51:49.489726  433557 api_server.go:131] duration metric: took 4.075047023s to wait for apiserver health ...
	I0408 12:51:49.489737  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:51:49.489772  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:49.489845  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:49.535955  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.535980  433557 cri.go:89] found id: ""
	I0408 12:51:49.535990  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:49.536052  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.543143  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:49.543239  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.590041  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:49.590075  433557 cri.go:89] found id: ""
	I0408 12:51:49.590087  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:49.590155  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.595726  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.595803  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.645009  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:49.645046  433557 cri.go:89] found id: ""
	I0408 12:51:49.645057  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:49.645110  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.650243  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.650329  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.693859  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:49.693882  433557 cri.go:89] found id: ""
	I0408 12:51:49.693895  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:49.693972  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.699620  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.699709  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.755614  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:49.755646  433557 cri.go:89] found id: ""
	I0408 12:51:49.755657  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:49.755739  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.761838  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.761913  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.808919  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:49.808950  433557 cri.go:89] found id: ""
	I0408 12:51:49.808961  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:49.809040  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.813965  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.814046  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.859700  433557 cri.go:89] found id: ""
	I0408 12:51:49.859737  433557 logs.go:276] 0 containers: []
	W0408 12:51:49.859748  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.859757  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:49.859832  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:49.908020  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:49.908044  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:49.908050  433557 cri.go:89] found id: ""
	I0408 12:51:49.908060  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:49.908129  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.913034  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.919193  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:49.919233  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.984657  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.984704  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:50.003487  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:50.003526  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:50.139417  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:50.139481  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:50.240166  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:50.240206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:50.288776  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:50.288823  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:50.339222  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:50.339252  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:50.402263  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:50.402308  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:50.461894  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:50.461946  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:50.507329  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:50.507373  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:50.576851  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:50.576894  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:47.932221  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.430702  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.947824  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:50.947878  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:51.007034  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:51.007084  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:53.563768  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:51:53.563811  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.563818  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.563824  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.563829  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.563835  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.563840  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.563850  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.563857  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.563870  433557 system_pods.go:74] duration metric: took 4.074125222s to wait for pod list to return data ...
	I0408 12:51:53.563884  433557 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:51:53.566991  433557 default_sa.go:45] found service account: "default"
	I0408 12:51:53.567015  433557 default_sa.go:55] duration metric: took 3.122873ms for default service account to be created ...
	I0408 12:51:53.567024  433557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:51:53.574517  433557 system_pods.go:86] 8 kube-system pods found
	I0408 12:51:53.574558  433557 system_pods.go:89] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.574565  433557 system_pods.go:89] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.574570  433557 system_pods.go:89] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.574575  433557 system_pods.go:89] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.574581  433557 system_pods.go:89] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.574587  433557 system_pods.go:89] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.574598  433557 system_pods.go:89] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.574605  433557 system_pods.go:89] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.574616  433557 system_pods.go:126] duration metric: took 7.585497ms to wait for k8s-apps to be running ...
	I0408 12:51:53.574629  433557 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:51:53.574720  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:53.597605  433557 system_svc.go:56] duration metric: took 22.957663ms WaitForService to wait for kubelet
	I0408 12:51:53.597658  433557 kubeadm.go:576] duration metric: took 4m22.91172229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:51:53.597683  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:51:53.601940  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:51:53.601992  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:51:53.602009  433557 node_conditions.go:105] duration metric: took 4.320913ms to run NodePressure ...
	I0408 12:51:53.602028  433557 start.go:240] waiting for startup goroutines ...
	I0408 12:51:53.602040  433557 start.go:245] waiting for cluster config update ...
	I0408 12:51:53.602060  433557 start.go:254] writing updated cluster config ...
	I0408 12:51:53.602426  433557 ssh_runner.go:195] Run: rm -f paused
	I0408 12:51:53.660257  433557 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:51:53.662533  433557 out.go:177] * Done! kubectl is now configured to use "no-preload-135234" cluster and "default" namespace by default
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:51:52.434513  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:54.930343  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:57.432046  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:59.436031  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:01.930142  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:03.931249  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:06.429806  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:08.929311  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:10.929707  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:13.430287  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:15.430449  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:17.933664  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:20.428983  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:21.300307  433674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.700649463s)
	I0408 12:52:21.300429  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:21.321628  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:21.334359  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:21.345697  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:21.345755  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:21.345804  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:52:21.356798  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:21.356868  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:21.368622  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:52:21.379589  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:21.379676  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:21.391211  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.401783  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:21.401874  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.413655  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:52:21.424585  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:21.424673  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:21.436887  433674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:21.495891  433674 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:21.496022  433674 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:21.667820  433674 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:21.667973  433674 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:21.668100  433674 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:21.904532  433674 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:21.906631  433674 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:21.906736  433674 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:21.906833  433674 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:21.906962  433674 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:21.907084  433674 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:21.907206  433674 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:21.907283  433674 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:21.907372  433674 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:21.907705  433674 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:21.908164  433674 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:21.908536  433674 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:21.908852  433674 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:21.908942  433674 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:22.096319  433674 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:22.286425  433674 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:22.442534  433674 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:22.542901  433674 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:22.959098  433674 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:22.959656  433674 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:22.962359  433674 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:22.965011  433674 out.go:204]   - Booting up control plane ...
	I0408 12:52:22.965148  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:22.965830  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:22.966718  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:22.987425  433674 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:22.988618  433674 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:22.988690  433674 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:23.134634  433674 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:52:22.429735  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.431237  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.923026  433439 pod_ready.go:81] duration metric: took 4m0.000804438s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	E0408 12:52:24.923079  433439 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:52:24.923103  433439 pod_ready.go:38] duration metric: took 4m6.498748448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:24.923143  433439 kubeadm.go:591] duration metric: took 4m14.484131334s to restartPrimaryControlPlane
	W0408 12:52:24.923222  433439 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:52:24.923260  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:52:29.641484  433674 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505486 seconds
	I0408 12:52:29.659612  433674 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:52:29.683882  433674 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:52:30.237806  433674 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:52:30.238135  433674 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-488947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:52:30.755095  433674 kubeadm.go:309] [bootstrap-token] Using token: kwhj7g.e2hm9yupaxknooep
	I0408 12:52:30.756904  433674 out.go:204]   - Configuring RBAC rules ...
	I0408 12:52:30.757044  433674 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:52:30.763322  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:52:30.776489  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:52:30.780180  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:52:30.784949  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:52:30.789409  433674 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:52:30.810228  433674 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:52:31.071672  433674 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:52:31.180390  433674 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:52:31.180421  433674 kubeadm.go:309] 
	I0408 12:52:31.180493  433674 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:52:31.180504  433674 kubeadm.go:309] 
	I0408 12:52:31.180626  433674 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:52:31.180652  433674 kubeadm.go:309] 
	I0408 12:52:31.180682  433674 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:52:31.180758  433674 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:52:31.180823  433674 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:52:31.180835  433674 kubeadm.go:309] 
	I0408 12:52:31.180898  433674 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:52:31.180908  433674 kubeadm.go:309] 
	I0408 12:52:31.180967  433674 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:52:31.180978  433674 kubeadm.go:309] 
	I0408 12:52:31.181069  433674 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:52:31.181200  433674 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:52:31.181301  433674 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:52:31.181312  433674 kubeadm.go:309] 
	I0408 12:52:31.181446  433674 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:52:31.181564  433674 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:52:31.181577  433674 kubeadm.go:309] 
	I0408 12:52:31.181706  433674 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.181869  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:52:31.181923  433674 kubeadm.go:309] 	--control-plane 
	I0408 12:52:31.181933  433674 kubeadm.go:309] 
	I0408 12:52:31.182039  433674 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:52:31.182055  433674 kubeadm.go:309] 
	I0408 12:52:31.182167  433674 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.182323  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:52:31.182467  433674 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:52:31.182492  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:52:31.182502  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:52:31.184299  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:52:31.185716  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:52:31.217708  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:52:31.277627  433674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:52:31.277716  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:31.277740  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-488947 minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=embed-certs-488947 minikube.k8s.io/primary=true
	I0408 12:52:31.591490  433674 ops.go:34] apiserver oom_adj: -16
	I0408 12:52:31.591651  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.092642  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.591845  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.092645  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.592585  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.092066  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.592232  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.091882  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.591794  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.091849  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.592616  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.091816  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.091756  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.592114  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.092524  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.591838  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.091853  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.591747  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.092421  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.592611  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.092369  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.092638  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.592549  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.091831  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.592358  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.799776  433674 kubeadm.go:1107] duration metric: took 13.522136387s to wait for elevateKubeSystemPrivileges
	W0408 12:52:44.799833  433674 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:52:44.799845  433674 kubeadm.go:393] duration metric: took 5m18.325910079s to StartCluster
	I0408 12:52:44.799870  433674 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.799981  433674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:52:44.802396  433674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.802704  433674 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:52:44.804525  433674 out.go:177] * Verifying Kubernetes components...
	I0408 12:52:44.802776  433674 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:52:44.802921  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:52:44.805724  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:52:44.805735  433674 addons.go:69] Setting metrics-server=true in profile "embed-certs-488947"
	I0408 12:52:44.805751  433674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488947"
	I0408 12:52:44.805777  433674 addons.go:234] Setting addon metrics-server=true in "embed-certs-488947"
	W0408 12:52:44.805792  433674 addons.go:243] addon metrics-server should already be in state true
	I0408 12:52:44.805824  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805727  433674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488947"
	I0408 12:52:44.805869  433674 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-488947"
	W0408 12:52:44.805883  433674 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:52:44.805915  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805834  433674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488947"
	I0408 12:52:44.806260  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806262  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806266  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806286  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806288  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806326  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.824170  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0408 12:52:44.824862  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.825517  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.825547  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.826049  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.826714  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.826752  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.827345  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0408 12:52:44.827569  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0408 12:52:44.828195  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828218  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828860  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.828892  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829023  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.829040  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829499  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829541  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829687  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.830201  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.830247  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.834128  433674 addons.go:234] Setting addon default-storageclass=true in "embed-certs-488947"
	W0408 12:52:44.834156  433674 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:52:44.834189  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.834569  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.834611  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.845829  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 12:52:44.846556  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.847545  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.847571  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.848210  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.848478  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.850407  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.850783  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0408 12:52:44.853144  433674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:52:44.851322  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.854214  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0408 12:52:44.855198  433674 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:44.855222  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:52:44.855245  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.855434  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.855766  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855797  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.855936  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855956  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.856190  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856264  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856382  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.856937  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.856973  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.857994  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.859623  433674 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:52:44.860991  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:52:44.861012  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:52:44.858778  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.861032  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.861051  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.861072  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.859293  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.861282  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.861617  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.861817  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.863813  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864274  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.864299  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864483  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.864681  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.864846  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.865028  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.874355  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0408 12:52:44.874834  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.875388  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.875418  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.875775  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.875967  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.877519  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.877786  433674 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:44.877803  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:52:44.877818  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.880463  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.880846  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.880874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.881040  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.881234  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.881615  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.881753  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
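
The sshutil lines above each open an SSH client to the node using the machine's private key. Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the IP, port, key path, and user are copied from the log, but the helper itself is an assumption for illustration, not minikube's sshutil implementation.

// Illustrative sketch (not minikube's sshutil): dial the node over SSH with
// the machine's private key, as the log records for 192.168.72.159.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Test VM with a freshly generated host key; a production client would verify it.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}

func main() {
	client, err := newSSHClient("192.168.72.159", 22,
		"/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa",
		"docker")
	if err != nil {
		fmt.Println("ssh:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
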
	I0408 12:52:45.057304  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:52:45.082702  433674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.091955  433674 node_ready.go:49] node "embed-certs-488947" has status "Ready":"True"
	I0408 12:52:45.091994  433674 node_ready.go:38] duration metric: took 9.246027ms for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.092007  433674 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:45.101654  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:45.237037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:52:45.237068  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:52:45.238421  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:45.274088  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:45.295037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:52:45.295078  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:52:45.397474  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:45.397504  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:52:45.431610  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:46.375681  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101541881s)
	I0408 12:52:46.375827  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.375862  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376204  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.376244  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.137166571s)
	I0408 12:52:46.376271  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.376291  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.376309  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376313  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376319  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.377184  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377205  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377613  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.377680  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377699  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377709  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.377747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.378168  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.378182  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.413325  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.413361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.413757  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.413780  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.679538  433674 pod_ready.go:92] pod "coredns-76f75df574-4gdp4" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.679577  433674 pod_ready.go:81] duration metric: took 1.577895468s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.679596  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760007  433674 pod_ready.go:92] pod "coredns-76f75df574-r5rxq" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.760043  433674 pod_ready.go:81] duration metric: took 80.437752ms for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760059  433674 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.803070  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371401052s)
	I0408 12:52:46.803136  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803150  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803496  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803519  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803530  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803539  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803846  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803862  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803882  433674 addons.go:470] Verifying addon metrics-server=true in "embed-certs-488947"
	I0408 12:52:46.806034  433674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0408 12:52:46.804164  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.807597  433674 pod_ready.go:81] duration metric: took 47.521367ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807622  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807621  433674 addons.go:505] duration metric: took 2.004847213s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0408 12:52:46.827049  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.827075  433674 pod_ready.go:81] duration metric: took 19.440746ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.827086  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848718  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.848759  433674 pod_ready.go:81] duration metric: took 21.664037ms for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848775  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087350  433674 pod_ready.go:92] pod "kube-proxy-mqrtp" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.087387  433674 pod_ready.go:81] duration metric: took 238.602902ms for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087403  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486822  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.486863  433674 pod_ready.go:81] duration metric: took 399.44977ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486875  433674 pod_ready.go:38] duration metric: took 2.394853452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:47.486895  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:52:47.486967  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:52:47.517426  433674 api_server.go:72] duration metric: took 2.714672176s to wait for apiserver process to appear ...
	I0408 12:52:47.517461  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:52:47.517492  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:52:47.527074  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:52:47.528230  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:52:47.528285  433674 api_server.go:131] duration metric: took 10.815426ms to wait for apiserver health ...
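
The healthz wait above polls the apiserver endpoint until it returns 200. A minimal sketch of that kind of probe follows, using only the standard library; the URL comes from the log, while the timeouts and the function itself are assumptions, not minikube's api_server.go.

// Illustrative probe of the apiserver healthz endpoint; not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The bootstrap apiserver presents a cluster-local cert, so this sketch
		// skips verification; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200, as in the log line above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.159:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
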
	I0408 12:52:47.528296  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:52:47.692054  433674 system_pods.go:59] 9 kube-system pods found
	I0408 12:52:47.692091  433674 system_pods.go:61] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:47.692096  433674 system_pods.go:61] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:47.692101  433674 system_pods.go:61] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:47.692105  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:47.692109  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:47.692112  433674 system_pods.go:61] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:47.692116  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:47.692123  433674 system_pods.go:61] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:47.692129  433674 system_pods.go:61] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:47.692137  433674 system_pods.go:74] duration metric: took 163.833038ms to wait for pod list to return data ...
	I0408 12:52:47.692146  433674 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:52:47.886668  433674 default_sa.go:45] found service account: "default"
	I0408 12:52:47.886695  433674 default_sa.go:55] duration metric: took 194.543392ms for default service account to be created ...
	I0408 12:52:47.886707  433674 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:52:48.090174  433674 system_pods.go:86] 9 kube-system pods found
	I0408 12:52:48.090212  433674 system_pods.go:89] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:48.090217  433674 system_pods.go:89] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:48.090222  433674 system_pods.go:89] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:48.090226  433674 system_pods.go:89] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:48.090232  433674 system_pods.go:89] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:48.090236  433674 system_pods.go:89] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:48.090240  433674 system_pods.go:89] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:48.090248  433674 system_pods.go:89] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:48.090253  433674 system_pods.go:89] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:48.090260  433674 system_pods.go:126] duration metric: took 203.547421ms to wait for k8s-apps to be running ...
	I0408 12:52:48.090266  433674 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:52:48.090312  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:48.106285  433674 system_svc.go:56] duration metric: took 15.998172ms WaitForService to wait for kubelet
	I0408 12:52:48.106322  433674 kubeadm.go:576] duration metric: took 3.303579521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:52:48.106345  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:52:48.287351  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:52:48.287381  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:52:48.287392  433674 node_conditions.go:105] duration metric: took 181.042972ms to run NodePressure ...
	I0408 12:52:48.287403  433674 start.go:240] waiting for startup goroutines ...
	I0408 12:52:48.287410  433674 start.go:245] waiting for cluster config update ...
	I0408 12:52:48.287419  433674 start.go:254] writing updated cluster config ...
	I0408 12:52:48.287738  433674 ssh_runner.go:195] Run: rm -f paused
	I0408 12:52:48.341532  433674 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:52:48.343890  433674 out.go:177] * Done! kubectl is now configured to use "embed-certs-488947" cluster and "default" namespace by default
	I0408 12:52:57.475303  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.552015668s)
	I0408 12:52:57.475390  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:57.492800  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:57.507211  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:57.520174  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:57.520203  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:57.520267  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:52:57.531854  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:57.531939  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:57.543764  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:52:57.555407  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:57.555479  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:57.569452  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.580478  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:57.580575  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.591819  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:52:57.602496  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:57.602589  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
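
The block above checks each kubeconfig for the expected control-plane endpoint and deletes the file when grep exits non-zero. A small sketch of that check-and-remove pattern is shown below; the file names and endpoint mirror the log, but the helper is hypothetical and runs the commands locally rather than over the test VM's SSH session.

// Sketch of the stale-config cleanup pattern shown above: grep each kubeconfig
// for the expected control-plane endpoint and remove it when the check fails.
// Not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os/exec"
)

func cleanupStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8444")
}
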
	I0408 12:52:57.613811  433439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:57.669998  433439 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:57.670137  433439 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:57.830674  433439 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:57.830802  433439 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:57.830882  433439 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:58.090382  433439 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:58.092626  433439 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:58.092733  433439 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:58.092809  433439 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:58.092906  433439 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:58.093027  433439 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:58.093130  433439 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:58.093202  433439 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:58.093547  433439 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:58.093941  433439 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:58.094342  433439 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:58.094708  433439 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:58.095077  433439 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:58.095159  433439 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:58.328890  433439 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:58.516475  433439 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:58.830765  433439 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:59.052737  433439 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:59.306668  433439 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:59.307305  433439 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:59.312102  433439 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:59.314983  433439 out.go:204]   - Booting up control plane ...
	I0408 12:52:59.315104  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:59.315191  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:59.315305  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:59.334624  433439 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:59.335637  433439 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:59.335713  433439 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:59.486408  433439 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:05.490227  433439 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002996 seconds
	I0408 12:53:05.526221  433439 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:53:05.553758  433439 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:53:06.101116  433439 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:53:06.101340  433439 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-527454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:53:06.616939  433439 kubeadm.go:309] [bootstrap-token] Using token: oe56hb.uz3a0dd96vnry1w3
	I0408 12:53:06.618840  433439 out.go:204]   - Configuring RBAC rules ...
	I0408 12:53:06.619038  433439 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:53:06.625364  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:53:06.638696  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:53:06.643811  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:53:06.647895  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:53:06.651857  433439 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:53:06.677056  433439 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:53:06.939588  433439 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:53:07.038633  433439 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:53:07.041464  433439 kubeadm.go:309] 
	I0408 12:53:07.041565  433439 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:53:07.041578  433439 kubeadm.go:309] 
	I0408 12:53:07.041680  433439 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:53:07.041699  433439 kubeadm.go:309] 
	I0408 12:53:07.041723  433439 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:53:07.041824  433439 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:53:07.041906  433439 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:53:07.041917  433439 kubeadm.go:309] 
	I0408 12:53:07.041988  433439 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:53:07.041998  433439 kubeadm.go:309] 
	I0408 12:53:07.042103  433439 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:53:07.042123  433439 kubeadm.go:309] 
	I0408 12:53:07.042168  433439 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:53:07.042253  433439 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:53:07.042351  433439 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:53:07.042361  433439 kubeadm.go:309] 
	I0408 12:53:07.042588  433439 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:53:07.042708  433439 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:53:07.042719  433439 kubeadm.go:309] 
	I0408 12:53:07.042823  433439 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.042959  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:53:07.042994  433439 kubeadm.go:309] 	--control-plane 
	I0408 12:53:07.043003  433439 kubeadm.go:309] 
	I0408 12:53:07.043127  433439 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:53:07.043143  433439 kubeadm.go:309] 
	I0408 12:53:07.043253  433439 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.043400  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:53:07.043583  433439 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:53:07.043608  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:53:07.043620  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:53:07.045283  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:53:07.046614  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:53:07.074907  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:53:07.107168  433439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:53:07.107232  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.107256  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-527454 minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=default-k8s-diff-port-527454 minikube.k8s.io/primary=true
	I0408 12:53:07.208551  433439 ops.go:34] apiserver oom_adj: -16
	I0408 12:53:07.395206  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.896090  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.396097  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.896240  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.395654  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.895751  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.396242  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.896204  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.395766  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.895555  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.396014  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.896092  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.395507  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.895832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.395237  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.895333  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.396191  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.895561  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.395832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.895785  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.395460  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.895320  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.395826  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.896002  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.396326  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.514796  433439 kubeadm.go:1107] duration metric: took 12.407623504s to wait for elevateKubeSystemPrivileges
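
The repeated "kubectl get sa default" lines above are a retry loop that waits for the default service account to exist before the minikube-rbac binding can take effect. A sketch of that polling pattern follows; the kubectl path, kubeconfig, and the roughly 500ms cadence mirror the log, while the function itself is an assumption, not minikube's elevateKubeSystemPrivileges.

// Sketch of the retry loop behind the repeated "kubectl get sa default" lines:
// poll until the default service account exists or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // the service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.29.3/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
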
	W0408 12:53:19.514843  433439 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:53:19.514856  433439 kubeadm.go:393] duration metric: took 5m9.134867072s to StartCluster
	I0408 12:53:19.514882  433439 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.514981  433439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:53:19.516708  433439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.516988  433439 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:53:19.518597  433439 out.go:177] * Verifying Kubernetes components...
	I0408 12:53:19.517057  433439 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:53:19.517238  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:53:19.519990  433439 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520011  433439 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:19.520003  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0408 12:53:19.520052  433439 addons.go:243] addon metrics-server should already be in state true
	I0408 12:53:19.520095  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.519995  433439 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520161  433439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.520247  433439 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:53:19.520274  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.520519  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520521  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520555  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520616  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520639  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520556  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.536637  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0408 12:53:19.536896  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0408 12:53:19.536997  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0408 12:53:19.537194  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537369  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537453  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537748  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537772  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.537883  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537895  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538210  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538262  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538352  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.538372  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538815  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.538818  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538875  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.539030  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.542211  433439 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.542228  433439 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:53:19.542252  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.542841  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.542871  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.556920  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0408 12:53:19.557552  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0408 12:53:19.557712  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.557930  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.558468  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.558482  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.559174  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.559474  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.559852  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.559881  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.560358  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.561323  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.561357  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.561606  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.563808  433439 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:53:19.565205  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:53:19.565224  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:53:19.565252  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.565914  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0408 12:53:19.566710  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.567503  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.567521  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.568270  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.568656  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.568664  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.569109  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.569136  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.569294  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.569451  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.569707  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.569894  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.570455  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.572243  433439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:53:19.573764  433439 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:19.573784  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:53:19.573804  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.576844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577310  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.577380  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577547  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.577851  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.578009  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.578154  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.579402  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0408 12:53:19.579860  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.580428  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.580448  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.581001  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.581202  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.582638  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.582913  433439 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:19.582929  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:53:19.582949  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.585995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586456  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.586488  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586665  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.586845  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.586974  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.587077  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.782606  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:53:19.822413  433439 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833467  433439 node_ready.go:49] node "default-k8s-diff-port-527454" has status "Ready":"True"
	I0408 12:53:19.833493  433439 node_ready.go:38] duration metric: took 11.040127ms for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833503  433439 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:19.845052  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:19.990826  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:20.027800  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:53:20.027827  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:53:20.066661  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:20.168240  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:53:20.168271  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:53:20.327307  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.327336  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:53:20.390128  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.455235  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455265  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455575  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455607  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.455618  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455628  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455912  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455929  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.494751  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.494778  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.495103  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.495126  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.495132  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.454862  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.388156991s)
	I0408 12:53:21.454942  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.454956  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455313  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.455368  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455377  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455386  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.455395  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455729  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455753  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455797  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.591677  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201496165s)
	I0408 12:53:21.591745  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.591760  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.592145  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592183  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592199  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.592214  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592484  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592501  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592513  433439 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:21.594462  433439 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0408 12:53:21.595731  433439 addons.go:505] duration metric: took 2.078676652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
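Each addon above is installed by copying its manifests onto the node and applying them with the cluster's bundled kubectl. A reduced sketch of that apply step follows; the binary path, manifest paths, and kubeconfig location are taken from the log, and the command is run with plain local exec rather than over SSH, so this is illustrative only, not how the harness itself drives the node.

    // applyaddons.go: apply the metrics-server manifests the way the log above
    // does, i.e. kubectl apply -f ... with the node-local kubeconfig.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.3/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
    	// The log runs this with KUBECONFIG pointing at the node's kubeconfig.
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("apply failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }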
	I0408 12:53:21.852741  433439 pod_ready.go:102] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"False"
	I0408 12:53:22.375241  433439 pod_ready.go:92] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.375283  433439 pod_ready.go:81] duration metric: took 2.53020032s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.375298  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.391968  433439 pod_ready.go:92] pod "coredns-76f75df574-z56lf" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.392003  433439 pod_ready.go:81] duration metric: took 16.695581ms for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.392018  433439 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398659  433439 pod_ready.go:92] pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.398688  433439 pod_ready.go:81] duration metric: took 6.657546ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398699  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407214  433439 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.407241  433439 pod_ready.go:81] duration metric: took 8.535246ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407252  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416605  433439 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.416632  433439 pod_ready.go:81] duration metric: took 9.374648ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416644  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750191  433439 pod_ready.go:92] pod "kube-proxy-tlhff" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.750220  433439 pod_ready.go:81] duration metric: took 333.570363ms for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750231  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.148980  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:23.149009  433439 pod_ready.go:81] duration metric: took 398.771226ms for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.149018  433439 pod_ready.go:38] duration metric: took 3.315505787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
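The readiness gate just completed waits for every system-critical pod label to report Ready. A rough equivalent is sketched below; it shells out to kubectl wait (not how the harness does it internally, which uses the Go client) and the context name, namespace, labels, and timeout are copied from the log.

    // waitpods.go: poll system-critical pods for the Ready condition,
    // roughly mirroring the pod_ready wait in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	labels := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    	}
    	for _, l := range labels {
    		// kubectl blocks until matching pods report Ready or the timeout expires.
    		cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-527454",
    			"-n", "kube-system", "wait", "pod", "-l", l,
    			"--for=condition=Ready", "--timeout=6m0s")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("label %s not ready: %v\n%s", l, err, out)
    			return
    		}
    		fmt.Printf("pods with label %s are Ready\n", l)
    	}
    }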
	I0408 12:53:23.149034  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:53:23.149087  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:53:23.165120  433439 api_server.go:72] duration metric: took 3.648094543s to wait for apiserver process to appear ...
	I0408 12:53:23.165149  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:53:23.165170  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:53:23.171016  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:53:23.172486  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:53:23.172510  433439 api_server.go:131] duration metric: took 7.354957ms to wait for apiserver health ...
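The healthz probe logged at api_server.go:253 boils down to an HTTPS GET against the apiserver endpoint shown above. A minimal sketch of that probe follows; the endpoint is taken from the log, and certificate verification is skipped purely to keep the sketch short, which is an assumption rather than what the real check necessarily does.

    // healthz.go: probe the apiserver /healthz endpoint seen in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		// InsecureSkipVerify only for brevity; a real client should trust the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.50.7:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
    }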
	I0408 12:53:23.172518  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:53:23.353807  433439 system_pods.go:59] 9 kube-system pods found
	I0408 12:53:23.353846  433439 system_pods.go:61] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.353853  433439 system_pods.go:61] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.353859  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.353866  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.353874  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.353879  433439 system_pods.go:61] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.353883  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.353890  433439 system_pods.go:61] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.353896  433439 system_pods.go:61] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.353911  433439 system_pods.go:74] duration metric: took 181.386053ms to wait for pod list to return data ...
	I0408 12:53:23.353923  433439 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:53:23.549663  433439 default_sa.go:45] found service account: "default"
	I0408 12:53:23.549702  433439 default_sa.go:55] duration metric: took 195.766529ms for default service account to be created ...
	I0408 12:53:23.549717  433439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:53:23.755668  433439 system_pods.go:86] 9 kube-system pods found
	I0408 12:53:23.755729  433439 system_pods.go:89] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.755739  433439 system_pods.go:89] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.755748  433439 system_pods.go:89] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.755755  433439 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.755761  433439 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.755768  433439 system_pods.go:89] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.755774  433439 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.755787  433439 system_pods.go:89] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.755792  433439 system_pods.go:89] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.755805  433439 system_pods.go:126] duration metric: took 206.081481ms to wait for k8s-apps to be running ...
	I0408 12:53:23.755814  433439 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:53:23.755866  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:23.774910  433439 system_svc.go:56] duration metric: took 19.080727ms WaitForService to wait for kubelet
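The WaitForService step above relies on systemd's exit status via "systemctl is-active --quiet". The same check as a tiny sketch, with the unit name taken from the log:

    // kubeletsvc.go: check whether the kubelet systemd unit is active,
    // as the WaitForService step above does via "systemctl is-active".
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit status alone carries the answer.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet service is not active:", err)
    		return
    	}
    	fmt.Println("kubelet service is active")
    }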
	I0408 12:53:23.774954  433439 kubeadm.go:576] duration metric: took 4.257931558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:53:23.774985  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:53:23.949588  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:53:23.949618  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:53:23.949630  433439 node_conditions.go:105] duration metric: took 174.638826ms to run NodePressure ...
	I0408 12:53:23.949642  433439 start.go:240] waiting for startup goroutines ...
	I0408 12:53:23.949649  433439 start.go:245] waiting for cluster config update ...
	I0408 12:53:23.949659  433439 start.go:254] writing updated cluster config ...
	I0408 12:53:23.949929  433439 ssh_runner.go:195] Run: rm -f paused
	I0408 12:53:24.004633  433439 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:53:24.007640  433439 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-527454" cluster and "default" namespace by default
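The "minor skew: 0" figure reported two lines up is simply the difference between the kubectl and cluster minor versions. A small sketch of that comparison, with the version strings copied from the log and the parsing deliberately simplified:

    // skew.go: compute the kubectl/cluster minor-version skew reported in the log.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0
    	}
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectlVer, clusterVer := "1.29.3", "1.29.3" // values from the log above
    	skew := minor(kubectlVer) - minor(clusterVer)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVer, clusterVer, skew)
    }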
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
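The cleanup loop above greps each kubeconfig for the expected control-plane URL and removes the file when the URL (or the file itself) is absent. A compact sketch of the same idea follows; it runs directly on the node's filesystem rather than over SSH, so it is purely illustrative of the pattern, with the endpoint and file list taken from the log.

    // stalecfg.go: drop kubeconfig files that do not reference the expected
    // control-plane endpoint, mirroring the grep/rm loop in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or wrong endpoint: treat the config as stale and remove it.
    			_ = os.Remove(f)
    			fmt.Println("removed stale config:", f)
    			continue
    		}
    		fmt.Println("keeping:", f)
    	}
    }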
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
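The repeated [kubelet-check] failures above come from polling the kubelet's local health endpoint on port 10248. An equivalent probe is sketched here; the port comes from the log, while the timeout is an arbitrary choice for the sketch.

    // kubelethealth.go: probe the kubelet healthz endpoint that kubeadm's
    // [kubelet-check] keeps failing against in the log above.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 3 * time.Second}
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		// "connection refused" here matches the failure mode in the log:
    		// the kubelet is not running or not listening on 10248.
    		fmt.Println("kubelet healthz failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body)
    }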
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
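The empty results above come from running crictl once per control-plane component and finding no containers at all. A short sketch of that loop follows; it is run locally with sudo and the runtime endpoint omitted, so it is illustrative only, with the component names taken from the log.

    // crilist.go: list CRI containers for each control-plane component,
    // mirroring the per-component crictl calls in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+c).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }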
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 
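The K8S_KUBELET_NOT_RUNNING exit above, together with the kubeadm advice it quotes, points at the kubelet never becoming healthy on this node, most commonly a cgroup-driver mismatch between the kubelet and the container runtime. A minimal troubleshooting sequence on the affected machine, assembled only from the commands the log itself suggests (profile name omitted, since this report interleaves logs from several profiles):

	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the journal confirms a cgroup-driver mismatch, the retry suggested in the warning above would look roughly like this; a sketch only, with --container-runtime=crio matching the cri-o runtime shown in the CRI-O section that follows, and any other original start flags still required:

	minikube start --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd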
	
	
	==> CRI-O <==
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.851966901Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&PodSandboxMetadata{Name:busybox,Uid:e34c664b-3926-4ddf-98b9-7bb599eee6ca,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580454188067070,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:47:25.416070131Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ndz4x,Uid:f33b7eb7-3553-4027-ac38-f3ee62cc67d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17125804532935735
06,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:47:25.416053726Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9aed4c33fa91573f16431352503e34a32ff8a6808e637d524e088be6ac85b194,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-dbb9b,Uid:f435d865-85f3-4d32-bedf-c3bf053500fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580450493843657,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-dbb9b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f435d865-85f3-4d32-bedf-c3bf053500fe,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:47:25.4
16075771Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&PodSandboxMetadata{Name:kube-proxy-tr6td,Uid:4e97a709-efb2-4d44-8f2e-b9e9fef5fb70,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580446656583496,Labels:map[string]string{controller-revision-hash: 97c89d47,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f2e-b9e9fef5fb70,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:47:25.416064393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:64374707-2bed-4656-a07a-38e950da5333,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580446633675587,Labels:map[string]st
ring{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/c
onfig.seen: 2024-04-08T12:47:25.416068188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-135234,Uid:6a504075cb3a2add1c3f5ae973fcfff9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580440949713441,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6a504075cb3a2add1c3f5ae973fcfff9,kubernetes.io/config.seen: 2024-04-08T12:47:20.377656856Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-135234,Uid:c5336d68b869284c41908908fd176a37,Name
space:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580440940788348,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c5336d68b869284c41908908fd176a37,kubernetes.io/config.seen: 2024-04-08T12:47:20.377655870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-135234,Uid:df2c4c6e50402f450b51be61653856d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580440920721603,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b5
1be61653856d6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.48:2379,kubernetes.io/config.hash: df2c4c6e50402f450b51be61653856d6,kubernetes.io/config.seen: 2024-04-08T12:47:20.450283537Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-135234,Uid:d8eba9bb52edf68b218a17bfc407e5c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580440910372546,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.48:8443,kubernetes.io/config.hash: d8eba9bb52edf68b218a17bfc407e5c8,kubern
etes.io/config.seen: 2024-04-08T12:47:20.377651512Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=03f96400-ce4c-41cd-8efb-f3c3063d44e4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.853276755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc40fbb3-58ab-4b6a-ac12-d7b7ae9771b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.853367410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc40fbb3-58ab-4b6a-ac12-d7b7ae9771b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.853866288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc40fbb3-58ab-4b6a-ac12-d7b7ae9771b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.890920286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c3ef1f3-7212-4d7d-a24e-512ba84964fc name=/runtime.v1.RuntimeService/Version
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.891020384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c3ef1f3-7212-4d7d-a24e-512ba84964fc name=/runtime.v1.RuntimeService/Version
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.895603960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42c306bc-c87a-47c1-97e4-d097ead76d67 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.896064356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581255896002438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42c306bc-c87a-47c1-97e4-d097ead76d67 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.896905857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ca95428-1dd7-42f0-a408-6d572708a5ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.896969770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ca95428-1dd7-42f0-a408-6d572708a5ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.897164542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ca95428-1dd7-42f0-a408-6d572708a5ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.945070417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29c0a984-1773-494a-839b-44f19f7cbd32 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.945304428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29c0a984-1773-494a-839b-44f19f7cbd32 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.946512777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3672eae2-04bc-4460-9a72-7c3198c9cccc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.946986138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581255946962888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3672eae2-04bc-4460-9a72-7c3198c9cccc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.947449778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3edbb3cf-b233-4030-816e-b993c86d751a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.947577039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3edbb3cf-b233-4030-816e-b993c86d751a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.947800548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3edbb3cf-b233-4030-816e-b993c86d751a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.990455462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7654ac5-f492-4cbd-a95c-8cb75e6f7eca name=/runtime.v1.RuntimeService/Version
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.990612845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7654ac5-f492-4cbd-a95c-8cb75e6f7eca name=/runtime.v1.RuntimeService/Version
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.992079409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bf2a6c4-4953-4d12-ad0b-75ed763c3cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.992430193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581255992403060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bf2a6c4-4953-4d12-ad0b-75ed763c3cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.993067709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa273867-5d44-4baf-be6e-58f5fbcda76f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.993149777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa273867-5d44-4baf-be6e-58f5fbcda76f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:00:55 no-preload-135234 crio[730]: time="2024-04-08 13:00:55.993445413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa273867-5d44-4baf-be6e-58f5fbcda76f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6c1545f860a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   1d0fa77fa4e84       storage-provisioner
	e826602fffd40       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   94d8b3e4518d0       busybox
	eef06839046da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   61a5b22049c08       coredns-7db6d8ff4d-ndz4x
	9afab6e492932       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652                                      13 minutes ago      Running             kube-proxy                1                   aea6c90dcd381       kube-proxy-tr6td
	78ee8679f8367       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   1d0fa77fa4e84       storage-provisioner
	31df11caa819e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   6341883da4bbb       etcd-no-preload-135234
	bb1c9d0aa3889       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5                                      13 minutes ago      Running             kube-scheduler            1                   4e5300388929c       kube-scheduler-no-preload-135234
	76a18493a630c       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a                                      13 minutes ago      Running             kube-controller-manager   1                   75e6c81293865       kube-controller-manager-no-preload-135234
	380c451b3806e       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3                                      13 minutes ago      Running             kube-apiserver            1                   d2586afd37420       kube-apiserver-no-preload-135234
	
	
	==> coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43428 - 8500 "HINFO IN 7617088099657041315.5867437557060873632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009418175s
	
	
	==> describe nodes <==
	Name:               no-preload-135234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-135234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=no-preload-135234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_38_57_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:38:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-135234
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 13:00:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:58:09 +0000   Mon, 08 Apr 2024 12:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:58:09 +0000   Mon, 08 Apr 2024 12:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:58:09 +0000   Mon, 08 Apr 2024 12:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:58:09 +0000   Mon, 08 Apr 2024 12:47:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.48
	  Hostname:    no-preload-135234
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 18c48622337b460382a3ee7ec0672944
	  System UUID:                18c48622-337b-4603-82a3-ee7ec0672944
	  Boot ID:                    a3cfe8ba-a14f-4e41-9b54-35c60f5a9546
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-ndz4x                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-135234                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-135234             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-135234    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-tr6td                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-135234             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-dbb9b              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-135234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-135234 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-135234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-135234 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node no-preload-135234 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-135234 event: Registered Node no-preload-135234 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-135234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-135234 event: Registered Node no-preload-135234 in Controller
	
	
	==> dmesg <==
	[Apr 8 12:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052896] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041171] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.563193] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.872146] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.636415] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.920656] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.056041] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070444] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.183986] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.182803] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.317385] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[Apr 8 12:47] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.068130] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.341938] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +6.588358] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.090456] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[  +2.500377] kauditd_printk_skb: 72 callbacks suppressed
	[  +7.352202] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] <==
	{"level":"warn","ts":"2024-04-08T12:47:47.902819Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:47:46.492779Z","time spent":"1.40995136s","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":802,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" mod_revision:561 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" value_size:707 lease:2438293193884155420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" > >"}
	{"level":"warn","ts":"2024-04-08T12:47:48.18618Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.513173ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11661665230738931628 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b\" mod_revision:562 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b\" value_size:830 lease:2438293193884155420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-08T12:47:48.18632Z","caller":"traceutil/trace.go:171","msg":"trace[1875167048] linearizableReadLoop","detail":"{readStateIndex:638; appliedIndex:637; }","duration":"272.294262ms","start":"2024-04-08T12:47:47.914016Z","end":"2024-04-08T12:47:48.18631Z","steps":["trace[1875167048] 'read index received'  (duration: 101.532169ms)","trace[1875167048] 'applied index is now lower than readState.Index'  (duration: 170.761148ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T12:47:48.186338Z","caller":"traceutil/trace.go:171","msg":"trace[574962701] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"274.073911ms","start":"2024-04-08T12:47:47.912247Z","end":"2024-04-08T12:47:48.186321Z","steps":["trace[574962701] 'process raft request'  (duration: 103.363321ms)","trace[574962701] 'compare'  (duration: 170.384599ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T12:47:48.186548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.543969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-08T12:47:48.186614Z","caller":"traceutil/trace.go:171","msg":"trace[207258449] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b; range_end:; response_count:1; response_revision:600; }","duration":"272.646041ms","start":"2024-04-08T12:47:47.913959Z","end":"2024-04-08T12:47:48.186605Z","steps":["trace[207258449] 'agreement among raft nodes before linearized reading'  (duration: 272.404723ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:47:48.550963Z","caller":"traceutil/trace.go:171","msg":"trace[487914323] linearizableReadLoop","detail":"{readStateIndex:639; appliedIndex:638; }","duration":"345.277115ms","start":"2024-04-08T12:47:48.205665Z","end":"2024-04-08T12:47:48.550942Z","steps":["trace[487914323] 'read index received'  (duration: 344.102476ms)","trace[487914323] 'applied index is now lower than readState.Index'  (duration: 1.173672ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T12:47:48.551217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.532574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-08T12:47:48.55125Z","caller":"traceutil/trace.go:171","msg":"trace[1873912533] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b; range_end:; response_count:1; response_revision:601; }","duration":"345.611692ms","start":"2024-04-08T12:47:48.20563Z","end":"2024-04-08T12:47:48.551241Z","steps":["trace[1873912533] 'agreement among raft nodes before linearized reading'  (duration: 345.386261ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:47:48.551276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:47:48.205613Z","time spent":"345.656653ms","remote":"127.0.0.1:52610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4259,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" "}
	{"level":"info","ts":"2024-04-08T12:47:48.551537Z","caller":"traceutil/trace.go:171","msg":"trace[1947281467] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"357.294412ms","start":"2024-04-08T12:47:48.194158Z","end":"2024-04-08T12:47:48.551452Z","steps":["trace[1947281467] 'process raft request'  (duration: 355.648721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:47:48.551624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:47:48.194144Z","time spent":"357.423025ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1e9df\" mod_revision:563 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1e9df\" value_size:668 lease:2438293193884155420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1e9df\" > >"}
	{"level":"info","ts":"2024-04-08T12:48:11.871542Z","caller":"traceutil/trace.go:171","msg":"trace[1636821799] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"382.321211ms","start":"2024-04-08T12:48:11.489125Z","end":"2024-04-08T12:48:11.871446Z","steps":["trace[1636821799] 'process raft request'  (duration: 382.15699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:48:11.871817Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:48:11.48904Z","time spent":"382.645424ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":802,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" mod_revision:597 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" value_size:707 lease:2438293193884155420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" > >"}
	{"level":"info","ts":"2024-04-08T12:48:12.166762Z","caller":"traceutil/trace.go:171","msg":"trace[1668074674] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"462.45923ms","start":"2024-04-08T12:48:11.704283Z","end":"2024-04-08T12:48:12.166742Z","steps":["trace[1668074674] 'read index received'  (duration: 167.250498ms)","trace[1668074674] 'applied index is now lower than readState.Index'  (duration: 295.207869ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T12:48:12.167252Z","caller":"traceutil/trace.go:171","msg":"trace[763150810] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"657.422092ms","start":"2024-04-08T12:48:11.509819Z","end":"2024-04-08T12:48:12.167241Z","steps":["trace[763150810] 'process raft request'  (duration: 656.810147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:48:12.167389Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:48:11.509801Z","time spent":"657.533808ms","remote":"127.0.0.1:52610","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4221,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" mod_revision:604 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" value_size:4155 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" > >"}
	{"level":"warn","ts":"2024-04-08T12:48:12.167679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"463.342003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-08T12:48:12.16778Z","caller":"traceutil/trace.go:171","msg":"trace[1515833610] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b; range_end:; response_count:1; response_revision:619; }","duration":"463.500669ms","start":"2024-04-08T12:48:11.704257Z","end":"2024-04-08T12:48:12.167757Z","steps":["trace[1515833610] 'agreement among raft nodes before linearized reading'  (duration: 463.321717ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:48:12.167856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:48:11.704244Z","time spent":"463.601633ms","remote":"127.0.0.1:52610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4259,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" "}
	{"level":"warn","ts":"2024-04-08T12:48:12.168024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.452638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b\" ","response":"range_response_count:1 size:940"}
	{"level":"info","ts":"2024-04-08T12:48:12.16807Z","caller":"traceutil/trace.go:171","msg":"trace[1226717369] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b; range_end:; response_count:1; response_revision:619; }","duration":"292.519135ms","start":"2024-04-08T12:48:11.875543Z","end":"2024-04-08T12:48:12.168062Z","steps":["trace[1226717369] 'agreement among raft nodes before linearized reading'  (duration: 292.40909ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:57:23.88779Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":837}
	{"level":"info","ts":"2024-04-08T12:57:23.900188Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":837,"took":"11.93959ms","hash":589023636,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2637824,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-08T12:57:23.90027Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":589023636,"revision":837,"compact-revision":-1}
	
	
	==> kernel <==
	 13:00:56 up 14 min,  0 users,  load average: 0.04, 0.09, 0.08
	Linux no-preload-135234 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] <==
	I0408 12:55:26.484889       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:57:25.482988       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:57:25.483165       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0408 12:57:26.483320       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:57:26.483584       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 12:57:26.483629       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:57:26.486923       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:57:26.486996       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 12:57:26.487006       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:58:26.483990       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:26.484142       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 12:58:26.484150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:58:26.487626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:26.487707       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 12:58:26.487721       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:00:26.488039       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:00:26.488213       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:00:26.488231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:00:26.488295       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:00:26.488367       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:00:26.489515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] <==
	I0408 12:55:11.375328       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:55:40.874411       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:55:41.384372       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:56:10.880433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:56:11.394259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:56:40.886771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:56:41.405674       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:57:10.892790       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:57:11.415566       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:57:40.900415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:57:41.423183       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:58:10.907135       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:58:11.432071       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 12:58:35.496817       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="282.629µs"
	E0408 12:58:40.912986       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:58:41.440304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 12:58:46.502139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="77.686µs"
	E0408 12:59:10.918145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:59:11.449543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:59:40.925374       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:59:41.462415       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:00:10.932077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:00:11.471387       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:00:40.939209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:00:41.481541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] <==
	I0408 12:47:27.737118       1 server_linux.go:69] "Using iptables proxy"
	I0408 12:47:27.758016       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.48"]
	I0408 12:47:27.838606       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0408 12:47:27.838774       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:47:27.838886       1 server_linux.go:165] "Using iptables Proxier"
	I0408 12:47:27.848291       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:47:27.849536       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0408 12:47:27.849696       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:47:27.853241       1 config.go:319] "Starting node config controller"
	I0408 12:47:27.853378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 12:47:27.861375       1 config.go:192] "Starting service config controller"
	I0408 12:47:27.861421       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 12:47:27.861452       1 config.go:101] "Starting endpoint slice config controller"
	I0408 12:47:27.861456       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 12:47:27.954441       1 shared_informer.go:320] Caches are synced for node config
	I0408 12:47:27.961649       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0408 12:47:27.961718       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] <==
	I0408 12:47:23.277776       1 serving.go:380] Generated self-signed cert in-memory
	W0408 12:47:25.375369       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 12:47:25.375597       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:47:25.375729       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 12:47:25.375871       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 12:47:25.460589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0408 12:47:25.461069       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:47:25.464099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0408 12:47:25.464451       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 12:47:25.465727       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 12:47:25.465837       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0408 12:47:25.566313       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 12:58:20 no-preload-135234 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 12:58:20 no-preload-135234 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 12:58:20 no-preload-135234 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 12:58:35 no-preload-135234 kubelet[1360]: E0408 12:58:35.482288    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 12:58:46 no-preload-135234 kubelet[1360]: E0408 12:58:46.483764    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 12:59:01 no-preload-135234 kubelet[1360]: E0408 12:59:01.481928    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 12:59:15 no-preload-135234 kubelet[1360]: E0408 12:59:15.482596    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 12:59:20 no-preload-135234 kubelet[1360]: E0408 12:59:20.501568    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 12:59:20 no-preload-135234 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 12:59:20 no-preload-135234 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 12:59:20 no-preload-135234 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 12:59:20 no-preload-135234 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 12:59:27 no-preload-135234 kubelet[1360]: E0408 12:59:27.482239    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 12:59:40 no-preload-135234 kubelet[1360]: E0408 12:59:40.484691    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 12:59:51 no-preload-135234 kubelet[1360]: E0408 12:59:51.482056    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:00:06 no-preload-135234 kubelet[1360]: E0408 13:00:06.483434    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:00:20 no-preload-135234 kubelet[1360]: E0408 13:00:20.483823    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:00:20 no-preload-135234 kubelet[1360]: E0408 13:00:20.503584    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 13:00:20 no-preload-135234 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:00:20 no-preload-135234 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:00:20 no-preload-135234 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:00:20 no-preload-135234 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:00:31 no-preload-135234 kubelet[1360]: E0408 13:00:31.481331    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:00:42 no-preload-135234 kubelet[1360]: E0408 13:00:42.481389    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:00:53 no-preload-135234 kubelet[1360]: E0408 13:00:53.481930    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	
	
	==> storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] <==
	I0408 12:47:27.705107       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0408 12:47:57.708981       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] <==
	I0408 12:47:57.914155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:47:57.928204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:47:57.928286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:48:15.342323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:48:15.342667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-135234_4aceb192-f162-4ff9-bd07-33e095ec5491!
	I0408 12:48:15.344435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e66177d-ed01-459b-b2fc-842fa98cd685", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-135234_4aceb192-f162-4ff9-bd07-33e095ec5491 became leader
	I0408 12:48:15.465679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-135234_4aceb192-f162-4ff9-bd07-33e095ec5491!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-135234 -n no-preload-135234
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-135234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-dbb9b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-135234 describe pod metrics-server-569cc877fc-dbb9b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-135234 describe pod metrics-server-569cc877fc-dbb9b: exit status 1 (70.518442ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-dbb9b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-135234 describe pod metrics-server-569cc877fc-dbb9b: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0408 12:53:06.832514  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:53:22.066757  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-488947 -n embed-certs-488947
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-08 13:01:48.948131523 +0000 UTC m=+6086.734813731
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-488947 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-488947 logs -n 25: (2.279057028s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo cat                                               |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo containerd config dump                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:42:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
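Every entry below follows the glog/klog header format described on the line above: a severity letter, the date as mmdd, a microsecond timestamp, the thread id, the source file and line, and the message. As a minimal sketch (assuming nothing beyond that format string; the field names are the editor's own), a line such as the first one below can be split into its parts like this:

// Illustrative only: parse a glog/klog-style header such as
// "I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...".
package main

import (
	"fmt"
	"regexp"
)

var header = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ..."
	if m := header.FindStringSubmatch(line); m != nil {
		// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file:line, m[6]=message
		fmt.Printf("severity=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}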
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:42:32.207974  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:38.288048  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:41.359947  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:47.439972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:50.512009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:56.591982  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:59.664002  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:05.744032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:08.816017  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:14.895990  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:17.967942  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:24.048010  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:27.119964  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:33.200067  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:36.272037  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:42.351972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:45.424082  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:51.503992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:54.576088  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:00.656001  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:03.728079  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:09.807949  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:12.880051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:18.960024  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:22.032036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:28.112053  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:31.183992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:37.264032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:40.336026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:46.416019  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:49.487998  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:55.568026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:58.640044  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:04.719978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:07.792028  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:13.871997  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:16.944057  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:23.023969  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:26.096051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:32.176049  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:35.247929  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:41.328036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:44.399954  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:50.480046  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:53.552034  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:59.632009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:02.704063  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:08.784031  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:11.856098  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:17.936013  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:21.007970  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:27.087978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:30.159984  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:36.240042  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:39.245220  433557 start.go:364] duration metric: took 4m33.298555643s to acquireMachinesLock for "no-preload-135234"
	I0408 12:46:39.245298  433557 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:39.245311  433557 fix.go:54] fixHost starting: 
	I0408 12:46:39.245782  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:39.245821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:39.261035  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0408 12:46:39.261632  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:39.262208  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:46:39.262234  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:39.262592  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:39.262819  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:39.262938  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:46:39.264995  433557 fix.go:112] recreateIfNeeded on no-preload-135234: state=Stopped err=<nil>
	I0408 12:46:39.265029  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	W0408 12:46:39.265203  433557 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:39.266971  433557 out.go:177] * Restarting existing kvm2 VM for "no-preload-135234" ...
	I0408 12:46:39.268140  433557 main.go:141] libmachine: (no-preload-135234) Calling .Start
	I0408 12:46:39.268315  433557 main.go:141] libmachine: (no-preload-135234) Ensuring networks are active...
	I0408 12:46:39.269323  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network default is active
	I0408 12:46:39.269669  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network mk-no-preload-135234 is active
	I0408 12:46:39.270047  433557 main.go:141] libmachine: (no-preload-135234) Getting domain xml...
	I0408 12:46:39.270763  433557 main.go:141] libmachine: (no-preload-135234) Creating domain...
	I0408 12:46:40.496145  433557 main.go:141] libmachine: (no-preload-135234) Waiting to get IP...
	I0408 12:46:40.497357  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.497870  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.497950  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.497853  434768 retry.go:31] will retry after 305.764185ms: waiting for machine to come up
	I0408 12:46:40.805894  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.806351  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.806380  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.806304  434768 retry.go:31] will retry after 359.02584ms: waiting for machine to come up
	I0408 12:46:39.242442  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:39.242498  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.242871  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:46:39.242935  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.243206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:46:39.245063  433439 machine.go:97] duration metric: took 4m37.367683512s to provisionDockerMachine
	I0408 12:46:39.245112  433439 fix.go:56] duration metric: took 4m37.391017413s for fixHost
	I0408 12:46:39.245118  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 4m37.391040241s
	W0408 12:46:39.245140  433439 start.go:713] error starting host: provision: host is not running
	W0408 12:46:39.245388  433439 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0408 12:46:39.245401  433439 start.go:728] Will try again in 5 seconds ...
	I0408 12:46:41.167272  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.167748  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.167779  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.167702  434768 retry.go:31] will retry after 412.762727ms: waiting for machine to come up
	I0408 12:46:41.582454  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.582959  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.582990  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.582904  434768 retry.go:31] will retry after 572.486121ms: waiting for machine to come up
	I0408 12:46:42.156830  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.157270  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.157294  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.157243  434768 retry.go:31] will retry after 706.130574ms: waiting for machine to come up
	I0408 12:46:42.865325  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.865829  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.865863  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.865762  434768 retry.go:31] will retry after 901.114252ms: waiting for machine to come up
	I0408 12:46:43.768578  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:43.769067  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:43.769103  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:43.769032  434768 retry.go:31] will retry after 1.160836088s: waiting for machine to come up
	I0408 12:46:44.931002  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:44.931408  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:44.931438  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:44.931349  434768 retry.go:31] will retry after 998.940623ms: waiting for machine to come up
	I0408 12:46:44.247774  433439 start.go:360] acquireMachinesLock for default-k8s-diff-port-527454: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:46:45.931728  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:45.932157  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:45.932241  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:45.932115  434768 retry.go:31] will retry after 1.43975568s: waiting for machine to come up
	I0408 12:46:47.373294  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:47.373786  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:47.373821  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:47.373733  434768 retry.go:31] will retry after 1.828434336s: waiting for machine to come up
	I0408 12:46:49.205019  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:49.205414  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:49.205462  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:49.205376  434768 retry.go:31] will retry after 2.847051956s: waiting for machine to come up
	I0408 12:46:52.055004  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:52.055561  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:52.055586  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:52.055517  434768 retry.go:31] will retry after 2.941262871s: waiting for machine to come up
	I0408 12:46:54.998158  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:54.998598  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:54.998631  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:54.998542  434768 retry.go:31] will retry after 3.082026915s: waiting for machine to come up
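The repeated "will retry after …: waiting for machine to come up" messages above come from a polling loop that probes the libvirt network for the VM's DHCP lease and backs off with growing, jittered intervals. The following Go sketch only illustrates that pattern under assumed names and constants (lookupIP, the doubling factor, the 3s cap); it is not minikube's retry code.

// Illustrative retry loop with growing, jittered waits, similar in spirit to
// the "will retry after ..." messages above. lookupIP and the backoff limits
// are assumptions for the example, not minikube internals.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the interval and add jitter so repeated probes spread out.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait *= 2; wait > 3*time.Second {
			wait = 3 * time.Second
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}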
	I0408 12:46:59.561049  433674 start.go:364] duration metric: took 4m43.922045129s to acquireMachinesLock for "embed-certs-488947"
	I0408 12:46:59.561130  433674 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:59.561140  433674 fix.go:54] fixHost starting: 
	I0408 12:46:59.561636  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:59.561683  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:59.578117  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0408 12:46:59.578573  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:59.579047  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:46:59.579074  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:59.579432  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:59.579633  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:46:59.579852  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:46:59.581445  433674 fix.go:112] recreateIfNeeded on embed-certs-488947: state=Stopped err=<nil>
	I0408 12:46:59.581492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	W0408 12:46:59.581667  433674 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:59.584306  433674 out.go:177] * Restarting existing kvm2 VM for "embed-certs-488947" ...
	I0408 12:46:59.585750  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Start
	I0408 12:46:59.585971  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring networks are active...
	I0408 12:46:59.586749  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network default is active
	I0408 12:46:59.587136  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network mk-embed-certs-488947 is active
	I0408 12:46:59.587551  433674 main.go:141] libmachine: (embed-certs-488947) Getting domain xml...
	I0408 12:46:59.588302  433674 main.go:141] libmachine: (embed-certs-488947) Creating domain...
	I0408 12:46:58.084025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084608  433557 main.go:141] libmachine: (no-preload-135234) Found IP for machine: 192.168.61.48
	I0408 12:46:58.084660  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has current primary IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084668  433557 main.go:141] libmachine: (no-preload-135234) Reserving static IP address...
	I0408 12:46:58.085160  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.085198  433557 main.go:141] libmachine: (no-preload-135234) Reserved static IP address: 192.168.61.48
	I0408 12:46:58.085213  433557 main.go:141] libmachine: (no-preload-135234) DBG | skip adding static IP to network mk-no-preload-135234 - found existing host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"}
	I0408 12:46:58.085229  433557 main.go:141] libmachine: (no-preload-135234) DBG | Getting to WaitForSSH function...
	I0408 12:46:58.085240  433557 main.go:141] libmachine: (no-preload-135234) Waiting for SSH to be available...
	I0408 12:46:58.087595  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.087990  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.088025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.088155  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH client type: external
	I0408 12:46:58.088178  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa (-rw-------)
	I0408 12:46:58.088210  433557 main.go:141] libmachine: (no-preload-135234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:46:58.088228  433557 main.go:141] libmachine: (no-preload-135234) DBG | About to run SSH command:
	I0408 12:46:58.088241  433557 main.go:141] libmachine: (no-preload-135234) DBG | exit 0
	I0408 12:46:58.220043  433557 main.go:141] libmachine: (no-preload-135234) DBG | SSH cmd err, output: <nil>: 
	I0408 12:46:58.220440  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetConfigRaw
	I0408 12:46:58.221216  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.223881  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224184  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.224202  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224597  433557 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:46:58.224804  433557 machine.go:94] provisionDockerMachine start ...
	I0408 12:46:58.224828  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:58.225070  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.227668  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228048  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.228080  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228242  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.228438  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228647  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228780  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.228941  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.229238  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.229253  433557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:46:58.344562  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:46:58.344602  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.344888  433557 buildroot.go:166] provisioning hostname "no-preload-135234"
	I0408 12:46:58.344922  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.345147  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.347895  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348278  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.348311  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348433  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.348638  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348801  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348911  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.349077  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.349289  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.349303  433557 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-135234 && echo "no-preload-135234" | sudo tee /etc/hostname
	I0408 12:46:58.478959  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-135234
	
	I0408 12:46:58.478996  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.481692  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482164  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.482187  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482410  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.482643  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.482851  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.483032  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.483230  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.483446  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.483465  433557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-135234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-135234/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-135234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:46:58.606022  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
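The shell snippet executed above makes the new hostname resolvable by pinning it to 127.0.1.1 in /etc/hosts, and only when no entry for the hostname exists yet. A rough Go equivalent of that idempotent update, assuming a hypothetical ensureHostsEntry helper rather than anything minikube ships, might look like:

// Illustrative sketch of the idempotent /etc/hosts update performed by the
// shell snippet above: leave the file alone if the hostname is already
// present, otherwise rewrite an existing 127.0.1.1 line or append one.
// ensureHostsEntry is an assumed helper, not part of minikube.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
			return nil // hostname already mapped, nothing to do
		}
		if len(fields) >= 1 && fields[0] == "127.0.1.1" && !replaced {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "no-preload-135234"); err != nil {
		fmt.Fprintln(os.Stderr, "updating hosts file:", err)
	}
}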
	I0408 12:46:58.606059  433557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:46:58.606080  433557 buildroot.go:174] setting up certificates
	I0408 12:46:58.606092  433557 provision.go:84] configureAuth start
	I0408 12:46:58.606108  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.606465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.609605  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610046  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.610079  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.612452  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612756  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.612784  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612905  433557 provision.go:143] copyHostCerts
	I0408 12:46:58.612974  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:46:58.613029  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:46:58.613097  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:46:58.613200  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:46:58.613209  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:46:58.613232  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:46:58.613295  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:46:58.613302  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:46:58.613323  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:46:58.613438  433557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.no-preload-135234 san=[127.0.0.1 192.168.61.48 localhost minikube no-preload-135234]
	I0408 12:46:58.832264  433557 provision.go:177] copyRemoteCerts
	I0408 12:46:58.832335  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:46:58.832382  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.835259  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835609  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.835650  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835883  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.836158  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.836332  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.836468  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:58.922968  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:46:58.949601  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 12:46:58.976832  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:46:59.004643  433557 provision.go:87] duration metric: took 398.533019ms to configureAuth
	I0408 12:46:59.004683  433557 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:46:59.004885  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:46:59.004988  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.008264  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008735  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.008783  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008987  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.009238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009416  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009542  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.009680  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.009866  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.009884  433557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:46:59.299880  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:46:59.299912  433557 machine.go:97] duration metric: took 1.075094362s to provisionDockerMachine
	I0408 12:46:59.299925  433557 start.go:293] postStartSetup for "no-preload-135234" (driver="kvm2")
	I0408 12:46:59.299940  433557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:46:59.299981  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.300373  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:46:59.300406  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.303274  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303769  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.303806  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303941  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.304222  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.304575  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.304874  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.395808  433557 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:46:59.400795  433557 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:46:59.400831  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:46:59.400914  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:46:59.401021  433557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:46:59.401162  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:46:59.411883  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:46:59.438486  433557 start.go:296] duration metric: took 138.54299ms for postStartSetup
	I0408 12:46:59.438546  433557 fix.go:56] duration metric: took 20.19323532s for fixHost
	I0408 12:46:59.438577  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.441875  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442334  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.442366  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442528  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.442753  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.442969  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.443101  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.443232  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.443414  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.443424  433557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:46:59.560853  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580419.531854515
	
	I0408 12:46:59.560881  433557 fix.go:216] guest clock: 1712580419.531854515
	I0408 12:46:59.560891  433557 fix.go:229] Guest: 2024-04-08 12:46:59.531854515 +0000 UTC Remote: 2024-04-08 12:46:59.438552641 +0000 UTC m=+293.653384531 (delta=93.301874ms)
	I0408 12:46:59.560918  433557 fix.go:200] guest clock delta is within tolerance: 93.301874ms
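	Note: the fix.go entries above record the guest/host clock comparison; a resync is only attempted when the measured delta exceeds a tolerance. The short Go sketch below illustrates that check only; the 2-second tolerance and the surrounding scaffolding are illustrative assumptions, not minikube's actual code or values.

	// clockdelta.go: minimal sketch of a guest-clock tolerance check,
	// mirroring the fix.go lines above. Tolerance value is assumed.
	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to
	// the host clock that no resync is needed.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(93 * time.Millisecond) // roughly the delta seen in the log above
		if withinTolerance(guest, host, 2*time.Second) {
			fmt.Println("guest clock delta is within tolerance")
		} else {
			fmt.Println("guest clock drifted; a resync would be required")
		}
	}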
	I0408 12:46:59.560929  433557 start.go:83] releasing machines lock for "no-preload-135234", held for 20.315655744s
	I0408 12:46:59.560965  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.561244  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:59.564248  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564623  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.564658  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564758  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565245  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565434  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565524  433557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:46:59.565571  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.565726  433557 ssh_runner.go:195] Run: cat /version.json
	I0408 12:46:59.565752  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.568339  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568729  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568766  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.568789  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568931  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569139  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569201  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.569227  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.569300  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569392  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569486  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569647  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.569782  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569900  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.689264  433557 ssh_runner.go:195] Run: systemctl --version
	I0408 12:46:59.695704  433557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:46:59.848323  433557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:46:59.856068  433557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:46:59.856171  433557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:46:59.877460  433557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:46:59.877490  433557 start.go:494] detecting cgroup driver to use...
	I0408 12:46:59.877557  433557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:46:59.895329  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:46:59.910849  433557 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:46:59.910908  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:46:59.925541  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:46:59.941511  433557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:00.064454  433557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:00.218535  433557 docker.go:233] disabling docker service ...
	I0408 12:47:00.218614  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:00.234510  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:00.249703  433557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:00.403556  433557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:00.569324  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:00.585058  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:00.607536  433557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:00.607592  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.624701  433557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:00.624774  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.637414  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.649846  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.662725  433557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:00.675738  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.688667  433557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.710326  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.722619  433557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:00.734130  433557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:00.734227  433557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:00.749998  433557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
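	Note: the crio.go lines above first probe net.bridge.bridge-nf-call-iptables, treat a missing key as non-fatal ("might be okay"), fall back to loading br_netfilter, and then enable IP forwarding. The Go sketch below reproduces that probe-then-fallback order with the same commands; running them locally here is a stand-in for ssh_runner, not minikube's API.

	// netfilter.go: sketch of the probe/fallback order shown in the log:
	// try the bridge-nf sysctl, modprobe br_netfilter if it is missing,
	// then enable ip_forward.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}

	func main() {
		// the sysctl key only exists once br_netfilter is loaded, so a
		// failure here is tolerated and triggers the modprobe fallback
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				log.Fatalf("modprobe br_netfilter: %v", err)
			}
		}
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}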
	I0408 12:47:00.761556  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:00.881544  433557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:01.036952  433557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:01.037040  433557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:01.042260  433557 start.go:562] Will wait 60s for crictl version
	I0408 12:47:01.042329  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.046327  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:01.092359  433557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:01.092465  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.127373  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.165027  433557 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0408 12:47:00.888196  433674 main.go:141] libmachine: (embed-certs-488947) Waiting to get IP...
	I0408 12:47:00.889196  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:00.889766  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:00.889808  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:00.889702  434916 retry.go:31] will retry after 239.282192ms: waiting for machine to come up
	I0408 12:47:01.130508  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.131075  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.131111  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.131016  434916 retry.go:31] will retry after 388.837258ms: waiting for machine to come up
	I0408 12:47:01.522006  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.522413  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.522444  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.522364  434916 retry.go:31] will retry after 372.310428ms: waiting for machine to come up
	I0408 12:47:01.896325  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.896919  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.896954  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.896851  434916 retry.go:31] will retry after 574.930775ms: waiting for machine to come up
	I0408 12:47:02.474045  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.474626  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.474664  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.474557  434916 retry.go:31] will retry after 506.414729ms: waiting for machine to come up
	I0408 12:47:02.982589  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.983203  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.983238  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.983135  434916 retry.go:31] will retry after 614.351996ms: waiting for machine to come up
	I0408 12:47:03.599165  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:03.599682  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:03.599724  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:03.599640  434916 retry.go:31] will retry after 1.130025801s: waiting for machine to come up
	I0408 12:47:04.731350  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:04.731841  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:04.731874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:04.731791  434916 retry.go:31] will retry after 1.346613974s: waiting for machine to come up
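	Note: the retry.go lines above show the kvm2 driver polling libvirt for the embed-certs-488947 IP address, sleeping a growing, jittered interval between attempts. The Go snippet below is a minimal illustration of that poll-with-backoff pattern; the lookupIP stub, base delay, cap, and jitter factor are assumptions for illustration only.

	// retryip.go: sketch of the "waiting for machine to come up" loop
	// logged above. lookupIP stands in for the DHCP-lease query.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP fails until the guest has obtained an address.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.72.159", nil
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// add jitter and grow the delay, capping it so polls stay frequent
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 2*time.Second {
				delay *= 2
			}
		}
	}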
	I0408 12:47:01.166849  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:47:01.169772  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170183  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:01.170211  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170523  433557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:01.175336  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:01.193759  433557 kubeadm.go:877] updating cluster {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:01.193949  433557 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 12:47:01.194017  433557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:01.234439  433557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0408 12:47:01.234466  433557 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:01.234547  433557 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.234575  433557 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.234589  433557 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.234625  433557 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 12:47:01.234576  433557 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.234562  433557 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.234696  433557 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.234554  433557 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.236654  433557 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.236678  433557 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.236701  433557 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 12:47:01.236686  433557 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.236630  433557 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236789  433557 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.475737  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.476344  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.482596  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.486680  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.490012  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.496685  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0408 12:47:01.510269  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.597119  433557 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0408 12:47:01.597179  433557 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.597238  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696018  433557 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0408 12:47:01.696123  433557 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.696148  433557 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0408 12:47:01.696196  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696201  433557 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.696237  433557 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0408 12:47:01.696254  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696265  433557 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.696299  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.710260  433557 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0408 12:47:01.710317  433557 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.710369  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799524  433557 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0408 12:47:01.799583  433557 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.799592  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.799616  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.799626  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.799618  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799679  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.799734  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.916654  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 12:47:01.916701  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.916783  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:01.916809  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.923863  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.923904  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.923974  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.924021  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.924065  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924176  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924067  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.926651  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0408 12:47:01.926681  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926722  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926783  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0408 12:47:01.974801  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0408 12:47:01.974875  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974939  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:01.974969  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974944  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0408 12:47:02.062944  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.916991  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.990237597s)
	I0408 12:47:04.917016  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.942055075s)
	I0408 12:47:04.917036  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0408 12:47:04.917040  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0408 12:47:04.917047  433557 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917098  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917117  433557 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.854126587s)
	I0408 12:47:04.917187  433557 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0408 12:47:04.917233  433557 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.917278  433557 ssh_runner.go:195] Run: which crictl
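	Note: the cache_images/crio lines above follow one pattern per image: inspect the runtime for the image, skip the copy when the cached tarball already exists on the VM, then load it with podman. The Go snippet below illustrates that flow; the runner abstraction and helper names are assumptions, not minikube's exported API.

	// loadcached.go: sketch of the per-image cache flow visible in the log:
	// inspect, copy the cached tarball if missing, podman load, done.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runRemote stands in for ssh_runner: here it just runs locally.
	func runRemote(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
		}
		return nil
	}

	// loadCachedImage ensures img is present in the podman/CRI-O store,
	// loading it from a pre-copied tarball when the inspect fails.
	func loadCachedImage(img, tarball string) error {
		// does the runtime already have the image?
		if err := runRemote("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img); err == nil {
			return nil // nothing to transfer
		}
		// the tarball was scp'd (or skipped as "exists") earlier; load it now
		if err := runRemote("sudo", "podman", "load", "-i", tarball); err != nil {
			return fmt.Errorf("loading %s: %w", img, err)
		}
		fmt.Println("Transferred and loaded", img, "from cache")
		return nil
	}

	func main() {
		_ = loadCachedImage("registry.k8s.io/etcd:3.5.12-0",
			"/var/lib/minikube/images/etcd_3.5.12-0")
	}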
	I0408 12:47:06.080429  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:06.080910  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:06.080942  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:06.080866  434916 retry.go:31] will retry after 1.125692215s: waiting for machine to come up
	I0408 12:47:07.208553  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:07.209015  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:07.209040  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:07.208961  434916 retry.go:31] will retry after 1.958080491s: waiting for machine to come up
	I0408 12:47:09.169878  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:09.170289  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:09.170319  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:09.170243  434916 retry.go:31] will retry after 2.241966019s: waiting for machine to come up
	I0408 12:47:08.833969  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.916836964s)
	I0408 12:47:08.834011  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0408 12:47:08.834029  433557 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834032  433557 ssh_runner.go:235] Completed: which crictl: (3.916731005s)
	I0408 12:47:08.834085  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834101  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:11.414435  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:11.414829  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:11.414851  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:11.414786  434916 retry.go:31] will retry after 2.815941766s: waiting for machine to come up
	I0408 12:47:14.233868  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:14.234272  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:14.234318  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:14.234228  434916 retry.go:31] will retry after 3.213192238s: waiting for machine to come up
	I0408 12:47:10.925471  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091353526s)
	I0408 12:47:10.925519  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0408 12:47:10.925542  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925581  433557 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.091434251s)
	I0408 12:47:10.925612  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925673  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 12:47:10.925782  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:12.405175  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.479529413s)
	I0408 12:47:12.405221  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0408 12:47:12.405238  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:12.405236  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.479424271s)
	I0408 12:47:12.405270  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0408 12:47:12.405296  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:14.283021  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (1.877693108s)
	I0408 12:47:14.283061  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0408 12:47:14.283079  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:14.283143  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:17.451190  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451657  433674 main.go:141] libmachine: (embed-certs-488947) Found IP for machine: 192.168.72.159
	I0408 12:47:17.451705  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has current primary IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451725  433674 main.go:141] libmachine: (embed-certs-488947) Reserving static IP address...
	I0408 12:47:17.452192  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.452239  433674 main.go:141] libmachine: (embed-certs-488947) Reserved static IP address: 192.168.72.159
	I0408 12:47:17.452259  433674 main.go:141] libmachine: (embed-certs-488947) DBG | skip adding static IP to network mk-embed-certs-488947 - found existing host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"}
	I0408 12:47:17.452282  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Getting to WaitForSSH function...
	I0408 12:47:17.452297  433674 main.go:141] libmachine: (embed-certs-488947) Waiting for SSH to be available...
	I0408 12:47:17.454780  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455169  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.455208  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH client type: external
	I0408 12:47:17.455354  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa (-rw-------)
	I0408 12:47:17.455384  433674 main.go:141] libmachine: (embed-certs-488947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:17.455401  433674 main.go:141] libmachine: (embed-certs-488947) DBG | About to run SSH command:
	I0408 12:47:17.455414  433674 main.go:141] libmachine: (embed-certs-488947) DBG | exit 0
	I0408 12:47:17.585037  433674 main.go:141] libmachine: (embed-certs-488947) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:17.585443  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetConfigRaw
	I0408 12:47:17.586184  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.589492  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.589953  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.589985  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.590269  433674 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:47:17.590518  433674 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:17.590550  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:17.590798  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.593968  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594570  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.594615  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594832  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.595073  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595236  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595442  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.595661  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.595892  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.595905  433674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:17.708468  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:17.708504  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.708857  433674 buildroot.go:166] provisioning hostname "embed-certs-488947"
	I0408 12:47:17.708890  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.709083  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.712242  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712698  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.712732  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712928  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.713122  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713298  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713433  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.713612  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.713801  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.713817  433674 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
	I0408 12:47:17.842964  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488947
	
	I0408 12:47:17.843017  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.846436  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.846959  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.846992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.847225  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.847486  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847726  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847945  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.848182  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.848373  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.848397  433674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488947/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:17.975087  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:17.975123  433674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:17.975178  433674 buildroot.go:174] setting up certificates
	I0408 12:47:17.975198  433674 provision.go:84] configureAuth start
	I0408 12:47:17.975212  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.975606  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.979028  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979483  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.979510  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979754  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.982474  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.982944  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.982977  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.983174  433674 provision.go:143] copyHostCerts
	I0408 12:47:17.983230  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:17.983240  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:17.983291  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:17.983408  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:17.983419  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:17.983444  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:17.983500  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:17.983507  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:17.983526  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:17.983580  433674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488947 san=[127.0.0.1 192.168.72.159 embed-certs-488947 localhost minikube]
	I0408 12:47:18.043022  433674 provision.go:177] copyRemoteCerts
	I0408 12:47:18.043092  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:18.043162  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.046335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046722  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.046757  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046904  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.047145  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.047333  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.047475  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.134761  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:18.163745  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 12:47:18.192946  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:18.220790  433674 provision.go:87] duration metric: took 245.573885ms to configureAuth
	I0408 12:47:18.220827  433674 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:18.221067  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:47:18.221175  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.224177  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.224805  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.224839  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.225098  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.225363  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225569  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225797  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.226024  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.226202  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.226219  433674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:18.522682  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:18.522718  433674 machine.go:97] duration metric: took 932.18024ms to provisionDockerMachine
	I0408 12:47:18.522735  433674 start.go:293] postStartSetup for "embed-certs-488947" (driver="kvm2")
	I0408 12:47:18.522750  433674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:18.522776  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.523133  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:18.523174  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.526523  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.526872  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.526903  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.527101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.527336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.527512  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.527692  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.615353  433674 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:18.620414  433674 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:18.620447  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:18.620525  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:18.620627  433674 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:18.620726  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:18.630585  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:18.658952  433674 start.go:296] duration metric: took 136.200863ms for postStartSetup
	I0408 12:47:18.659004  433674 fix.go:56] duration metric: took 19.097863992s for fixHost
	I0408 12:47:18.659037  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.662115  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662571  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.662606  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662843  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.663100  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663308  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663480  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.663676  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.663919  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.663939  433674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:18.781355  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580438.730334929
	
	I0408 12:47:18.781402  433674 fix.go:216] guest clock: 1712580438.730334929
	I0408 12:47:18.781427  433674 fix.go:229] Guest: 2024-04-08 12:47:18.730334929 +0000 UTC Remote: 2024-04-08 12:47:18.659010209 +0000 UTC m=+303.178294166 (delta=71.32472ms)
	I0408 12:47:18.781457  433674 fix.go:200] guest clock delta is within tolerance: 71.32472ms
	I0408 12:47:18.781465  433674 start.go:83] releasing machines lock for "embed-certs-488947", held for 19.22036189s
	I0408 12:47:18.781502  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.781800  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:18.784825  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785270  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.785313  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786104  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786456  433674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:18.786501  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.786626  433674 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:18.786660  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.789409  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789704  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790019  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790149  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790306  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790322  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790338  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790495  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790528  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790745  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.790867  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790997  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.911025  433674 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:18.917785  433674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:19.070383  433674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:19.077521  433674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:19.077606  433674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:19.094598  433674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:19.094636  433674 start.go:494] detecting cgroup driver to use...
	I0408 12:47:19.094750  433674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:19.111163  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:19.125621  433674 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:19.125688  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:19.141948  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:19.156671  433674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:19.281688  433674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:19.455445  433674 docker.go:233] disabling docker service ...
	I0408 12:47:19.455519  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:19.474594  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:19.491301  433674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:19.646063  433674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:19.786075  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:19.803535  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:19.829204  433674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:19.829282  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.842132  433674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:19.842201  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.853915  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.866449  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.879235  433674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:19.899411  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.920363  433674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.946414  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.958824  433674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:19.969691  433674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:19.969754  433674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:19.986458  433674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:19.998655  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:20.157494  433674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:20.318209  433674 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:20.318287  433674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:20.325414  433674 start.go:562] Will wait 60s for crictl version
	I0408 12:47:20.325490  433674 ssh_runner.go:195] Run: which crictl
	I0408 12:47:20.330070  433674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:20.383808  433674 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:20.383959  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.417705  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.454321  433674 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:47:20.456101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:20.460035  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.460734  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:20.460774  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.461140  433674 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:20.467650  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:20.486936  433674 kubeadm.go:877] updating cluster {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:20.487105  433674 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:47:20.487176  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:20.529152  433674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:47:20.529293  433674 ssh_runner.go:195] Run: which lz4
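The lines above show minikube probing the container runtime for preloaded images (crio.go:510) before falling back to the preload tarball. As a minimal sketch, assuming only that `crictl images --output json` is available on the guest, the check amounts to parsing the reported repoTags; the helper name and error handling below are illustrative, not minikube's actual crio.go code.

// Hypothetical sketch of the "is this image already preloaded?" check described above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors just the fields of `crictl images --output json` that we need.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.3")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	// When this is false, the log above falls back to copying and extracting preloaded.tar.lz4.
	fmt.Println("preloaded image present:", ok)
}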
	I0408 12:47:16.552712  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.26954566s)
	I0408 12:47:16.552781  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0408 12:47:16.552797  433557 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:16.552839  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:17.512103  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 12:47:17.512151  433557 cache_images.go:123] Successfully loaded all cached images
	I0408 12:47:17.512158  433557 cache_images.go:92] duration metric: took 16.277680364s to LoadCachedImages
	I0408 12:47:17.512171  433557 kubeadm.go:928] updating node { 192.168.61.48 8443 v1.30.0-rc.0 crio true true} ...
	I0408 12:47:17.512324  433557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-135234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:17.512440  433557 ssh_runner.go:195] Run: crio config
	I0408 12:47:17.561382  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:17.561424  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:17.561441  433557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:17.561472  433557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-135234 NodeName:no-preload-135234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:17.561681  433557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-135234"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
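	The kubeadm config above is generated from the options logged at kubeadm.go:181 before being copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of how such a stanza can be rendered from an options struct with Go's text/template follows; the struct fields and template text are illustrative, not minikube's actual template, and only the ClusterConfiguration portion is shown.

// Hypothetical sketch: render a ClusterConfiguration stanza from a small options struct.
package main

import (
	"os"
	"text/template"
)

type clusterOpts struct {
	KubernetesVersion string
	ControlPlane      string // host:port, e.g. control-plane.minikube.internal:8443
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlane}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := clusterOpts{
		KubernetesVersion: "v1.30.0-rc.0",
		ControlPlane:      "control-plane.minikube.internal:8443",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; the real flow instead ships the rendered file to the guest over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}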
	
	I0408 12:47:17.561807  433557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0408 12:47:17.574237  433557 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:17.574321  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:17.587129  433557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0408 12:47:17.609022  433557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0408 12:47:17.629656  433557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0408 12:47:17.650373  433557 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:17.655031  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:17.670872  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:17.811548  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:17.830945  433557 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234 for IP: 192.168.61.48
	I0408 12:47:17.830974  433557 certs.go:194] generating shared ca certs ...
	I0408 12:47:17.831000  433557 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:17.831219  433557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:17.831277  433557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:17.831290  433557 certs.go:256] generating profile certs ...
	I0408 12:47:17.831453  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/client.key
	I0408 12:47:17.831521  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key.dbd08c09
	I0408 12:47:17.831577  433557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key
	I0408 12:47:17.831823  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:17.831891  433557 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:17.831906  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:17.831946  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:17.831978  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:17.832007  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:17.832059  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:17.832899  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:17.869894  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:17.902893  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:17.943547  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:17.990462  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:47:18.026697  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:18.055643  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:18.083357  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:47:18.109247  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:18.134513  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:18.161811  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:18.189968  433557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:18.210173  433557 ssh_runner.go:195] Run: openssl version
	I0408 12:47:18.216813  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:18.230693  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236461  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236526  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.244183  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:18.257589  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:18.271235  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277004  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277088  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.283549  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:18.296789  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:18.309587  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314537  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314608  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.320942  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:18.333407  433557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:18.338637  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:18.345365  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:18.352262  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:18.359464  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:18.366233  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:18.373280  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:18.380134  433557 kubeadm.go:391] StartCluster: {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:18.380291  433557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:18.380403  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.423068  433557 cri.go:89] found id: ""
	I0408 12:47:18.423164  433557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:18.435458  433557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:18.435497  433557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:18.435503  433557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:18.435562  433557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:18.447509  433557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:18.448720  433557 kubeconfig.go:125] found "no-preload-135234" server: "https://192.168.61.48:8443"
	I0408 12:47:18.451154  433557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:18.463246  433557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.48
	I0408 12:47:18.463299  433557 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:18.463315  433557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:18.463394  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.522929  433557 cri.go:89] found id: ""
	I0408 12:47:18.523011  433557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:18.546346  433557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:18.558613  433557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:18.558640  433557 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:18.558714  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:18.570020  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:18.570106  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:18.581323  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:18.593718  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:18.593778  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:18.606889  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.619251  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:18.619320  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.632343  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:18.644913  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:18.645004  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:18.656965  433557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:18.670774  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:18.785507  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:19.988135  433557 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202584017s)
	I0408 12:47:19.988174  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.235430  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.316709  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.456307  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:20.456393  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
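The last two lines above (api_server.go:52) wait for the kube-apiserver process to appear after the kubeadm init phases. A minimal sketch of that wait, assuming nothing more than `pgrep` on the guest: poll `pgrep -xnf` until it reports a match or a deadline passes. The helper name, poll interval, and timeout are illustrative rather than minikube's actual values.

// Hypothetical sketch: block until a process matching a pattern is running, or time out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line (-f).
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}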
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
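The "will retry after ..." lines above (retry.go:31) show the driver polling libvirt for the machine's DHCP lease with growing, slightly randomized delays. A minimal sketch of that retry pattern, assuming an arbitrary check function in place of the libvirt lookup; the backoff shape and jitter below are illustrative, not minikube's retry.go.

// Hypothetical sketch: retry a check with growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Add up to 50% jitter so parallel waiters don't poll in lockstep.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	attempt := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		attempt++
		if attempt < 4 {
			return errors.New("no DHCP lease yet") // stand-in for the libvirt DHCP lookup
		}
		return nil
	})
	fmt.Println("result:", err)
}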
	I0408 12:47:20.534154  433674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:20.538991  433674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:20.539034  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:47:22.249270  433674 crio.go:462] duration metric: took 1.715182486s to copy over tarball
	I0408 12:47:22.249391  433674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:24.966695  433674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.717265287s)
	I0408 12:47:24.966730  433674 crio.go:469] duration metric: took 2.717416948s to extract the tarball
	I0408 12:47:24.966740  433674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:25.007656  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:25.063445  433674 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:47:25.063482  433674 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:47:25.063494  433674 kubeadm.go:928] updating node { 192.168.72.159 8443 v1.29.3 crio true true} ...
	I0408 12:47:25.063627  433674 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-488947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:25.063745  433674 ssh_runner.go:195] Run: crio config
	I0408 12:47:25.122219  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:25.122282  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:25.122298  433674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:25.122330  433674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488947 NodeName:embed-certs-488947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:25.122556  433674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-488947"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
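The dump above is the kubeadm, kubelet and kube-proxy configuration that minikube renders before running the kubeadm init phases; the log later shows it being copied to /var/tmp/minikube/kubeadm.yaml.new and into place. Purely as an editorial sketch (not minikube code), the snippet below shows one way to confirm such a multi-document YAML file parses; the file path is taken from the log and gopkg.in/yaml.v3 is an assumed dependency.

// parse_kubeadm_config.go -- minimal sketch: decode each "---"-separated document
// in the generated kubeadm.yaml and report its apiVersion/kind.
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path as shown in the log
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err) // a correctly generated config should always parse
		}
		fmt.Printf("parsed %v %v\n", doc["apiVersion"], doc["kind"])
	}
}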
	
	I0408 12:47:25.122633  433674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:47:25.137001  433674 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:25.137148  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:25.151168  433674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0408 12:47:25.171698  433674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:25.195101  433674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0408 12:47:25.216873  433674 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:25.221155  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:25.235740  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:25.354135  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:25.377763  433674 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947 for IP: 192.168.72.159
	I0408 12:47:25.377801  433674 certs.go:194] generating shared ca certs ...
	I0408 12:47:25.377824  433674 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:25.378055  433674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:25.378137  433674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:25.378161  433674 certs.go:256] generating profile certs ...
	I0408 12:47:25.378299  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/client.key
	I0408 12:47:25.378391  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key.21d2a89c
	I0408 12:47:25.378460  433674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key
	I0408 12:47:25.378628  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:25.378687  433674 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:25.378702  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:25.378736  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:25.378780  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:25.378818  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:25.378888  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:25.379800  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:25.422370  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:25.468967  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:25.516750  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:20.956916  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.456948  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.957498  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.982763  433557 api_server.go:72] duration metric: took 1.526450888s to wait for apiserver process to appear ...
	I0408 12:47:21.982797  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:21.982852  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.363696  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.363732  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.363758  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.398003  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.398065  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.483280  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
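The interleaved libmachine lines above show the kvm2 driver polling libvirt for the VM's DHCP lease and backing off between attempts ("will retry after ...: waiting for machine to come up"). A minimal sketch of that retry-with-jitter pattern follows; it is an illustration only, not minikube's actual retry.go.

// retry_sketch.go -- poll a condition with a growing, jittered delay until it
// succeeds or a deadline passes, mirroring the varying delays seen in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(cond func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if err := cond(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		} else {
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			delay *= 2 // back off
		}
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}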
	I0408 12:47:26.461241  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.461305  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.461321  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.482518  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.482566  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.483554  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.497035  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.497075  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.983270  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.996515  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.996556  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.483125  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.491506  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.491549  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.983839  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.991044  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.991090  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.483669  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.490665  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:28.490703  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.983248  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.998278  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:47:29.007388  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:47:29.007429  433557 api_server.go:131] duration metric: took 7.024624495s to wait for apiserver health ...
	I0408 12:47:29.007444  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:29.007452  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:29.009506  433557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
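The sequence above is the apiserver health wait: anonymous requests are rejected with 403 until RBAC bootstrap runs, /healthz then returns 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-controller, apiservice-discovery-controller) are still pending, and finally 200 once every check passes. The sketch below polls the endpoint in the same spirit; the address is taken from the log, and skipping TLS verification is an illustrative shortcut rather than what minikube does (it authenticates with the cluster's client certificates).

// healthz_poll.go -- minimal sketch: poll /healthz until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.61.48:8443/healthz") // address from the log
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				fmt.Println(string(body)) // "ok"
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}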
	I0408 12:47:25.561601  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 12:47:26.087896  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:26.116559  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:26.145651  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:26.174910  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:26.206627  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:26.238398  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:26.281684  433674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:26.306417  433674 ssh_runner.go:195] Run: openssl version
	I0408 12:47:26.313279  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:26.328106  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333727  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333810  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.340200  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:26.352316  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:26.364788  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.369928  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.370003  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.376525  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:26.388232  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:26.400301  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405327  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405407  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.411586  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:26.423764  433674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:26.428995  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:26.435932  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:26.442742  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:26.451458  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:26.458715  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:26.466424  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
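The "openssl x509 -noout -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. An equivalent check in Go, as a sketch only (the certificate path is one of those listed in the log):

// cert_checkend.go -- minimal sketch of what "-checkend 86400" verifies:
// the certificate's NotAfter must lie beyond 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	cutoff := time.Now().Add(24 * time.Hour) // 86400 seconds, as in the -checkend flag
	if cert.NotAfter.Before(cutoff) {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate valid beyond the 24h window")
	}
}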
	I0408 12:47:26.473948  433674 kubeadm.go:391] StartCluster: {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:26.474083  433674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:26.474158  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.515603  433674 cri.go:89] found id: ""
	I0408 12:47:26.515676  433674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:26.526818  433674 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:26.526845  433674 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:26.526851  433674 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:26.526908  433674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:26.537675  433674 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:26.538807  433674 kubeconfig.go:125] found "embed-certs-488947" server: "https://192.168.72.159:8443"
	I0408 12:47:26.540848  433674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:26.551278  433674 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.159
	I0408 12:47:26.551317  433674 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:26.551330  433674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:26.551406  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.591372  433674 cri.go:89] found id: ""
	I0408 12:47:26.591478  433674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:26.610486  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:26.621770  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:26.621794  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:26.621869  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:26.632480  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:26.632554  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:26.645878  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:26.659969  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:26.660068  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:26.670611  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.680945  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:26.681034  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.692201  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:26.703049  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:26.703126  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:26.715887  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:26.727464  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:26.956245  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.722655  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.973294  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.086774  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.203640  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:28.203755  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:28.704550  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.203852  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.704305  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.724333  433674 api_server.go:72] duration metric: took 1.520681062s to wait for apiserver process to appear ...
	I0408 12:47:29.724372  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:29.724402  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:29.010843  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:29.029631  433557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:29.052609  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:29.069954  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:29.070010  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:29.070022  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:29.070034  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:29.070043  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:29.070049  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:47:29.070076  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:29.070087  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:29.070098  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:47:29.070107  433557 system_pods.go:74] duration metric: took 17.469317ms to wait for pod list to return data ...
	I0408 12:47:29.070117  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:29.075401  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:29.075443  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:29.075459  433557 node_conditions.go:105] duration metric: took 5.335891ms to run NodePressure ...
	I0408 12:47:29.075489  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:29.403218  433557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409235  433557 kubeadm.go:733] kubelet initialised
	I0408 12:47:29.409263  433557 kubeadm.go:734] duration metric: took 6.014758ms waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409276  433557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:29.418787  433557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.441264  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441310  433557 pod_ready.go:81] duration metric: took 22.478832ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.441325  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441336  433557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.461805  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461916  433557 pod_ready.go:81] duration metric: took 20.564997ms for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.461945  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461982  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.475160  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475198  433557 pod_ready.go:81] duration metric: took 13.191566ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.475229  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475241  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.486266  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486306  433557 pod_ready.go:81] duration metric: took 11.046794ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.486321  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486331  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.857658  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857703  433557 pod_ready.go:81] duration metric: took 371.357848ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.857717  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857725  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.258154  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258194  433557 pod_ready.go:81] duration metric: took 400.459219ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.258208  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258230  433557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.656845  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656890  433557 pod_ready.go:81] duration metric: took 398.64565ms for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.656904  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656915  433557 pod_ready.go:38] duration metric: took 1.247627349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:30.656947  433557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:47:30.683024  433557 ops.go:34] apiserver oom_adj: -16
	I0408 12:47:30.683055  433557 kubeadm.go:591] duration metric: took 12.247545723s to restartPrimaryControlPlane
	I0408 12:47:30.683067  433557 kubeadm.go:393] duration metric: took 12.302946s to StartCluster
	I0408 12:47:30.683095  433557 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.683214  433557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:30.685507  433557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.685852  433557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:47:30.687967  433557 out.go:177] * Verifying Kubernetes components...
	I0408 12:47:30.685951  433557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:47:30.686122  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:47:30.689462  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:30.689475  433557 addons.go:69] Setting storage-provisioner=true in profile "no-preload-135234"
	I0408 12:47:30.689511  433557 addons.go:234] Setting addon storage-provisioner=true in "no-preload-135234"
	W0408 12:47:30.689521  433557 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:47:30.689555  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.689573  433557 addons.go:69] Setting default-storageclass=true in profile "no-preload-135234"
	I0408 12:47:30.689620  433557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-135234"
	I0408 12:47:30.689956  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.689995  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.689996  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690026  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.690085  433557 addons.go:69] Setting metrics-server=true in profile "no-preload-135234"
	I0408 12:47:30.690135  433557 addons.go:234] Setting addon metrics-server=true in "no-preload-135234"
	W0408 12:47:30.690146  433557 addons.go:243] addon metrics-server should already be in state true
	I0408 12:47:30.690186  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.690614  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690692  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.710746  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0408 12:47:30.710947  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0408 12:47:30.711153  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0408 12:47:30.711301  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711752  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711839  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.712010  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712027  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712564  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.712757  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712780  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712911  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712926  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.713381  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.713427  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.713660  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714094  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714304  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.714365  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.714401  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.717892  433557 addons.go:234] Setting addon default-storageclass=true in "no-preload-135234"
	W0408 12:47:30.717959  433557 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:47:30.718004  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.718497  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.718577  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.734825  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0408 12:47:30.736890  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0408 12:47:30.756599  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.756681  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.757290  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757312  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757318  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757332  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757774  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.757849  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.758015  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.758082  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.760658  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.760732  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.762999  433557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:47:30.764689  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:47:30.764714  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:47:30.766392  433557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:30.764741  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.767890  433557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:30.767911  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:47:30.767933  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.772580  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.772714  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773015  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773038  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773423  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773449  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773462  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773663  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773875  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.773897  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.774038  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774074  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774163  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.774227  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.779694  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0408 12:47:30.780190  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.780772  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.780793  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.781114  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.781773  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.781821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.803661  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0408 12:47:30.804212  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.804828  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.804847  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.805397  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.805713  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.807761  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.808244  433557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:30.808269  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:47:30.808288  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.811598  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812078  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.812109  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812264  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.812465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.812702  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.812868  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:30.979880  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:31.004961  433557 node_ready.go:35] waiting up to 6m0s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:31.088114  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:31.110971  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:47:31.111017  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:47:31.150193  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:47:31.150229  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:47:31.184811  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.184899  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:47:31.214364  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.244802  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:32.406228  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.318067686s)
	I0408 12:47:32.406305  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406317  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.406830  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.406897  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.406913  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406921  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.407242  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407275  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407319  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.407329  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.532524  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318098791s)
	I0408 12:47:32.532662  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532694  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.532576  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287674494s)
	I0408 12:47:32.532774  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532799  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533022  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533041  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533052  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533060  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533223  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533280  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533286  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533294  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533301  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533457  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533516  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533539  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533546  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.534974  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.534991  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.535019  433557 addons.go:470] Verifying addon metrics-server=true in "no-preload-135234"
	I0408 12:47:32.543151  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.543183  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.543549  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.543571  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.546033  433557 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0408 12:47:32.894282  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:32.894320  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:32.894336  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:32.988397  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:32.988442  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.224790  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.232146  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.232176  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.724683  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.729479  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.729520  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:34.224919  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:34.230233  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:47:34.247835  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:47:34.247872  433674 api_server.go:131] duration metric: took 4.523492127s to wait for apiserver health ...
	I0408 12:47:34.247883  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:34.247890  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:34.249807  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:34.251603  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:34.265254  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:34.288078  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:34.301533  433674 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:34.301570  433674 system_pods.go:61] "coredns-76f75df574-hq2mm" [cfc7bd40-0b7d-4e00-ac55-b3ae796018ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:34.301577  433674 system_pods.go:61] "etcd-embed-certs-488947" [eb29ace5-8ad9-4080-a875-2eb83dcea583] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:34.301585  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [8e97033f-996a-4b64-9474-7b4d562eb1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:34.301591  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [b3db7631-d953-418e-9c72-f299d0287a2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:34.301595  433674 system_pods.go:61] "kube-proxy-2gn8m" [c31d8f0d-d6c1-4afa-b64c-7fc422d493f2] Running
	I0408 12:47:34.301600  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b9b29f85-7a75-4b09-b6cd-940ff42326d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:34.301604  433674 system_pods.go:61] "metrics-server-57f55c9bc5-z2ztl" [d9dc47ad-3370-4e55-a724-8c529c723992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:34.301607  433674 system_pods.go:61] "storage-provisioner" [4953dc3a-31ca-464d-9530-34f488ed9a02] Running
	I0408 12:47:34.301617  433674 system_pods.go:74] duration metric: took 13.514139ms to wait for pod list to return data ...
	I0408 12:47:34.301624  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:34.305931  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:34.305962  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:34.305974  433674 node_conditions.go:105] duration metric: took 4.345624ms to run NodePressure ...
	I0408 12:47:34.305993  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:34.598392  433674 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603606  433674 kubeadm.go:733] kubelet initialised
	I0408 12:47:34.603632  433674 kubeadm.go:734] duration metric: took 5.204237ms waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603641  433674 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:34.610027  433674 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:32.547718  433557 addons.go:505] duration metric: took 1.861769291s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0408 12:47:33.008857  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:35.510251  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:39.502172  433439 start.go:364] duration metric: took 55.254308447s to acquireMachinesLock for "default-k8s-diff-port-527454"
	I0408 12:47:39.502232  433439 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:39.502245  433439 fix.go:54] fixHost starting: 
	I0408 12:47:39.502725  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:39.502767  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:39.523738  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0408 12:47:39.525022  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:39.525614  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:47:39.525646  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:39.526077  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:39.526307  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:47:39.526448  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:47:39.528207  433439 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527454: state=Stopped err=<nil>
	I0408 12:47:39.528241  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	W0408 12:47:39.528449  433439 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:39.530360  433439 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-527454" ...
	I0408 12:47:36.618430  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.619713  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.009213  433557 node_ready.go:49] node "no-preload-135234" has status "Ready":"True"
	I0408 12:47:38.009241  433557 node_ready.go:38] duration metric: took 7.004239102s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:38.009250  433557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:38.014665  433557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020024  433557 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:38.020054  433557 pod_ready.go:81] duration metric: took 5.358174ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020067  433557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:40.030803  433557 pod_ready.go:102] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
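
The SSH command above adds or rewrites the 127.0.1.1 entry so /etc/hosts resolves the new hostname, and only touches the file when the name is missing. A minimal Go sketch of the same guarded edit (illustrative only, not minikube's provisioner; `ensureHostsEntry` is a hypothetical helper and the program assumes write access to /etc/hosts):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the guarded /etc/hosts edit shown above:
// do nothing if the hostname is already mapped, rewrite an existing
// 127.0.1.1 line if there is one, otherwise append a new entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-384148"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The shell version in the log does the same thing with grep/sed/tee so it can run over SSH without shipping a binary to the guest.
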
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
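
The configureAuth step above (provision.go) signs a server certificate against the shared minikube CA with the SANs it lists (127.0.0.1, 192.168.39.245, localhost, minikube, old-k8s-version-384148) and then copies server.pem/server-key.pem to /etc/docker. A self-contained sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; it generates a throwaway CA instead of loading ca.pem/ca-key.pem from .minikube/certs, so it illustrates the technique rather than reproducing minikube's code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would reuse its existing CA key pair instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate carrying the SANs listed by the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-384148"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-384148"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.245")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
}
```
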
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
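
The fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host-side timestamp, and only force a resync when the delta exceeds a tolerance. A small sketch of that comparison using the exact values from the log (the 2 s tolerance here is an illustrative figure, not necessarily minikube's threshold):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
// It assumes nine fractional digits, as %N prints.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nanos int64
	if len(parts) == 2 {
		if nanos, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseGuestClock("1712580459.470646239") // guest value from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 4, 8, 12, 47, 39, 380945950, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta: %v (within tolerance %v: %v)\n", delta, tolerance, delta <= tolerance)
}
```

With the values above the computed delta is 89.700289ms, matching the "delta is within tolerance" log line.
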
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
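
After restarting CRI-O, start.go waits up to 60 s for /var/run/crio/crio.sock to exist before probing crictl. minikube performs the check remotely with `stat` over SSH; the sketch below shows the same poll-until-present idea locally with os.Stat (names are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout expires, mirroring the
// "Will wait 60s for socket path" step above (minikube checks over SSH).
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
```
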
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:39.531815  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Start
	I0408 12:47:39.531988  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring networks are active...
	I0408 12:47:39.532969  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network default is active
	I0408 12:47:39.533486  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network mk-default-k8s-diff-port-527454 is active
	I0408 12:47:39.533947  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Getting domain xml...
	I0408 12:47:39.534767  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Creating domain...
	I0408 12:47:40.935150  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting to get IP...
	I0408 12:47:40.936250  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:40.936778  435248 retry.go:31] will retry after 215.442539ms: waiting for machine to come up
	I0408 12:47:41.154393  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154940  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.154852  435248 retry.go:31] will retry after 274.982374ms: waiting for machine to come up
	I0408 12:47:41.431442  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.431990  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.432023  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.431933  435248 retry.go:31] will retry after 335.077282ms: waiting for machine to come up
	I0408 12:47:40.620537  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:42.622241  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:44.118493  433674 pod_ready.go:92] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.118532  433674 pod_ready.go:81] duration metric: took 9.508474788s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.118545  433674 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626843  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.626869  433674 pod_ready.go:81] duration metric: took 508.318376ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626882  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633488  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.633521  433674 pod_ready.go:81] duration metric: took 6.630145ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633535  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027744  433557 pod_ready.go:92] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.027771  433557 pod_ready.go:81] duration metric: took 3.007695895s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027782  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034038  433557 pod_ready.go:92] pod "kube-apiserver-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.034076  433557 pod_ready.go:81] duration metric: took 6.28617ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034090  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039232  433557 pod_ready.go:92] pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.039262  433557 pod_ready.go:81] duration metric: took 5.161613ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039277  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045793  433557 pod_ready.go:92] pod "kube-proxy-tr6td" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.045887  433557 pod_ready.go:81] duration metric: took 6.600896ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045908  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.209976  433557 pod_ready.go:92] pod "kube-scheduler-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.210003  433557 pod_ready.go:81] duration metric: took 164.085848ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.210018  433557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:43.220338  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:45.718170  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
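
The pod_ready.go lines interleaved above (processes 433674 and 433557) poll each kube-system pod until its Ready condition turns True, logging has status "Ready":"False" on every miss. A stripped-down client-go sketch of that wait loop; `waitPodReady` is a hypothetical name, the kubeconfig path is assumed, and minikube's real helper also handles label selectors and richer error reporting:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a named pod until its Ready condition is True or the
// timeout expires, roughly what the pod_ready.go lines above report on.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between "Ready":"False" probes
	}
	return fmt.Errorf("pod %s/%s was not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-488947", 4*time.Minute))
}
```
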
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:41.768734  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769194  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.769131  435248 retry.go:31] will retry after 581.590127ms: waiting for machine to come up
	I0408 12:47:42.352156  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.352975  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.353017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:42.352850  435248 retry.go:31] will retry after 673.545679ms: waiting for machine to come up
	I0408 12:47:43.028329  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029066  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029101  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.028956  435248 retry.go:31] will retry after 690.795418ms: waiting for machine to come up
	I0408 12:47:43.721435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.721999  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.722025  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.721948  435248 retry.go:31] will retry after 941.917321ms: waiting for machine to come up
	I0408 12:47:44.665002  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665468  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665495  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:44.665406  435248 retry.go:31] will retry after 1.037587737s: waiting for machine to come up
	I0408 12:47:45.705319  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705792  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705822  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:45.705730  435248 retry.go:31] will retry after 1.287151334s: waiting for machine to come up
	I0408 12:47:46.640995  433674 pod_ready.go:102] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:48.558627  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.558666  433674 pod_ready.go:81] duration metric: took 3.925119514s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.558683  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583378  433674 pod_ready.go:92] pod "kube-proxy-2gn8m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.583405  433674 pod_ready.go:81] duration metric: took 24.715384ms for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583416  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598937  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.598969  433674 pod_ready.go:81] duration metric: took 15.544342ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598983  433674 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:47.918307  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:50.219513  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
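The six openssl invocations above each run `-checkend 86400`, i.e. they confirm the certificate will still be valid for at least another 24 hours before the existing files are reused. A minimal Go sketch of the same check, using a hypothetical helper rather than minikube's code:

// Sketch only: an equivalent of `openssl x509 -checkend 86400` in Go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// becomes invalid within the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}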
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
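The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above (and continuing below, interleaved with the other profiles' output) poll until the apiserver process appears. A rough sketch of such a poll-with-timeout loop; the helper name, interval, and timeout are assumptions for illustration:

// Sketch only: poll `pgrep` until a matching process exists or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries `pgrep -xnf pattern` until it succeeds or the timeout expires.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // at least one matching process exists
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}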
	I0408 12:47:46.994162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994640  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994672  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:46.994593  435248 retry.go:31] will retry after 1.863771905s: waiting for machine to come up
	I0408 12:47:48.860673  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861257  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:48.861151  435248 retry.go:31] will retry after 2.204894376s: waiting for machine to come up
	I0408 12:47:51.067423  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067909  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067937  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:51.067864  435248 retry.go:31] will retry after 2.625423179s: waiting for machine to come up
	I0408 12:47:50.608007  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:53.108084  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:52.717545  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:55.218944  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.695295  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695826  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695862  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:53.695772  435248 retry.go:31] will retry after 4.111917473s: waiting for machine to come up
	I0408 12:47:55.606909  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:58.111708  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:57.717559  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:59.718066  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.809179  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809697  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809729  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:57.809632  435248 retry.go:31] will retry after 4.27502806s: waiting for machine to come up
	I0408 12:48:02.086033  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086558  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has current primary IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086586  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Found IP for machine: 192.168.50.7
	I0408 12:48:02.086603  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserving static IP address...
	I0408 12:48:02.087069  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.087105  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserved static IP address: 192.168.50.7
	I0408 12:48:02.087137  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | skip adding static IP to network mk-default-k8s-diff-port-527454 - found existing host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"}
	I0408 12:48:02.087158  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Getting to WaitForSSH function...
	I0408 12:48:02.087177  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for SSH to be available...
	I0408 12:48:02.089228  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089585  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.089608  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH client type: external
	I0408 12:48:02.089840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa (-rw-------)
	I0408 12:48:02.089885  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:48:02.089900  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | About to run SSH command:
	I0408 12:48:02.089917  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | exit 0
	I0408 12:48:02.216245  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | SSH cmd err, output: <nil>: 
	I0408 12:48:02.216684  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetConfigRaw
	I0408 12:48:02.217582  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.220543  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.220961  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.220995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.221282  433439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:48:02.221480  433439 machine.go:94] provisionDockerMachine start ...
	I0408 12:48:02.221499  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:02.221738  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.224371  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.224770  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.224802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.225007  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.225236  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225399  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225548  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.225740  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.225957  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.225970  433439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:48:02.336716  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:48:02.336754  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337074  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:48:02.337108  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337351  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.340133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340539  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.340583  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340653  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.340842  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341016  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341171  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.341346  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.341539  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.341556  433439 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-527454 && echo "default-k8s-diff-port-527454" | sudo tee /etc/hostname
	I0408 12:48:02.464462  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-527454
	
	I0408 12:48:02.464507  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.467682  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468082  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.468118  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468335  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.468595  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468782  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468954  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.469154  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.469372  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.469392  433439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-527454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-527454/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-527454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:48:02.593971  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:48:02.594006  433439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:48:02.594061  433439 buildroot.go:174] setting up certificates
	I0408 12:48:02.594078  433439 provision.go:84] configureAuth start
	I0408 12:48:02.594092  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.594431  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.597587  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.598043  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.600898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601267  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.601299  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601497  433439 provision.go:143] copyHostCerts
	I0408 12:48:02.601562  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:48:02.601588  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:48:02.601653  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:48:02.601841  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:48:02.601857  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:48:02.601888  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:48:02.601966  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:48:02.601981  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:48:02.602010  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:48:02.602088  433439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-527454 san=[127.0.0.1 192.168.50.7 default-k8s-diff-port-527454 localhost minikube]
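provision.go:117 above issues a server certificate whose SANs cover 127.0.0.1, the machine IP 192.168.50.7, the profile name, localhost, and minikube, signed by the local CA key. The sketch below shows the x509 fields involved, but uses a self-signed certificate instead of the CA-signed one minikube produces; it is illustrative only:

// Sketch only: a self-signed server cert carrying the SANs listed in the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-527454"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-527454", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.7")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}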
	I0408 12:48:02.845116  433439 provision.go:177] copyRemoteCerts
	I0408 12:48:02.845190  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:48:02.845217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.848054  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.848406  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848559  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.848817  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.848986  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.849125  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:02.934223  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:48:02.962726  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0408 12:48:02.992767  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:48:03.021973  433439 provision.go:87] duration metric: took 427.87874ms to configureAuth
	I0408 12:48:03.022009  433439 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:48:03.022270  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:48:03.022382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.025407  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025765  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.025802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025959  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.026215  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026379  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026510  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.026659  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.026834  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.026856  433439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:48:03.310263  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:48:03.310307  433439 machine.go:97] duration metric: took 1.088813603s to provisionDockerMachine
	I0408 12:48:03.310323  433439 start.go:293] postStartSetup for "default-k8s-diff-port-527454" (driver="kvm2")
	I0408 12:48:03.310337  433439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:48:03.310362  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.310758  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:48:03.310799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.313533  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.313968  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.314001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.314201  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.314375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.314584  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.314760  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.400087  433439 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:48:03.405240  433439 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:48:03.405272  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:48:03.405351  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:48:03.405450  433439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:48:03.405570  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:48:03.415947  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:03.448935  433439 start.go:296] duration metric: took 138.593583ms for postStartSetup
	I0408 12:48:03.449025  433439 fix.go:56] duration metric: took 23.946779964s for fixHost
	I0408 12:48:03.449055  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.452026  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452392  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.452435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452630  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.452844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453063  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453248  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.453420  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.453604  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.453615  433439 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 12:48:03.565710  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580483.551031252
	
	I0408 12:48:03.565738  433439 fix.go:216] guest clock: 1712580483.551031252
	I0408 12:48:03.565750  433439 fix.go:229] Guest: 2024-04-08 12:48:03.551031252 +0000 UTC Remote: 2024-04-08 12:48:03.44903588 +0000 UTC m=+361.760256784 (delta=101.995372ms)
	I0408 12:48:03.565777  433439 fix.go:200] guest clock delta is within tolerance: 101.995372ms
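fix.go above reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and accepts the machine when the delta is within tolerance (here 101.995372ms). A small sketch of that comparison; the 2-second tolerance below is an assumed value, not taken from the log:

// Sketch only: compare a guest `date +%s.%N` timestamp against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1712580483.551031252" // as returned by `date +%s.%N` on the guest
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64) // error handling elided for brevity
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s, within 2s tolerance: %v\n", delta, delta <= 2*time.Second)
}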
	I0408 12:48:03.565787  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 24.063582343s
	I0408 12:48:03.565806  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.566106  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:03.569409  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.569776  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.569814  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.570017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570577  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570831  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570952  433439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:48:03.571021  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.571121  433439 ssh_runner.go:195] Run: cat /version.json
	I0408 12:48:03.571146  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.573939  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574167  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574300  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574333  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574469  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574594  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574621  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574674  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.574757  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574871  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.574957  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.575130  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.575441  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.575590  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.695930  433439 ssh_runner.go:195] Run: systemctl --version
	I0408 12:48:03.702915  433439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:48:03.853737  433439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:48:03.860218  433439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:48:03.860287  433439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:48:03.877827  433439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:48:03.877861  433439 start.go:494] detecting cgroup driver to use...
	I0408 12:48:03.877943  433439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:48:03.897232  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:48:03.913028  433439 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:48:03.913112  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:48:03.929574  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:48:03.946880  433439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:48:04.083524  433439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:48:04.243842  433439 docker.go:233] disabling docker service ...
	I0408 12:48:04.243938  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:48:04.260459  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:48:04.276119  433439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:48:04.428999  433439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:48:04.571431  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
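Before cri-o is configured, the cri-docker and docker units are taken out of the way with a stop/disable/mask sequence. A rough local equivalent of that systemctl sequence (a sketch, not minikube's implementation; minikube runs the same commands over SSH via ssh_runner, and the unit names are the ones in the log):

// Sketch: stop, disable and mask the competing container-runtime units.
package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) {
	cmd := append([]string{"systemctl"}, args...)
	if out, err := exec.Command("sudo", cmd...).CombinedOutput(); err != nil {
		fmt.Printf("%v: %v\n%s", cmd, err, out) // e.g. the unit may not exist on this guest
	}
}

func main() {
	// cri-dockerd first, then the docker engine itself.
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		systemctl("stop", "-f", unit)
	}
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
}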
	I0408 12:48:04.589661  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:48:04.612872  433439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:48:04.612954  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.625841  433439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:48:04.625939  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.638868  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.652106  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.664883  433439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:48:04.678149  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.691069  433439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.711329  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
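The sed edits above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf and inject the ip_unprivileged_port_start sysctl. The core rewrite, as a small standalone Go sketch (the file path and values come from the log; everything else is illustrative, not minikube's code):

// Sketch: rewrite the pause image and cgroup manager keys in the cri-o drop-in config.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Println("write:", err)
	}
}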
	I0408 12:48:04.725917  433439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:48:04.738875  433439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:48:04.738941  433439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:48:04.756784  433439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
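When the bridge-netfilter sysctl probe fails (as it does above, because br_netfilter is not yet loaded), the setup falls back to modprobe and then enables IPv4 forwarding. A local sketch of that fallback, assuming plain os/exec calls stand in for the SSH runner:

// Sketch: probe the bridge-nf sysctl, load br_netfilter if the probe fails, enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The bridge module is not loaded yet; load it so the sysctl exists on next boot of the stack.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
		}
	}
	// IPv4 forwarding is enabled unconditionally, as in the log.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "ip_forward:", err)
	}
}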
	I0408 12:48:04.769852  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:04.895658  433439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:48:05.056165  433439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:48:05.056270  433439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:48:05.061838  433439 start.go:562] Will wait 60s for crictl version
	I0408 12:48:05.061918  433439 ssh_runner.go:195] Run: which crictl
	I0408 12:48:05.066280  433439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:48:05.110966  433439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:48:05.111084  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.142272  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.176138  433439 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:48:00.606508  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:03.107018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:05.109926  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:02.220836  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:04.718465  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.177382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:05.180028  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180334  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:05.180363  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180635  433439 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:48:05.185436  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
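The hosts-file update above filters out any stale host.minikube.internal entry and appends the fresh mapping. The same idea as a short Go sketch (the IP is the one from this run; this is not minikube's code):

// Sketch: drop old "host.minikube.internal" lines from /etc/hosts and append the new mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\thost.minikube.internal") {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.50.1\thost.minikube.internal")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println("write:", err)
	}
}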
	I0408 12:48:05.199001  433439 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:48:05.199130  433439 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:48:05.199174  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:05.239255  433439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:48:05.239358  433439 ssh_runner.go:195] Run: which lz4
	I0408 12:48:05.244115  433439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:48:05.249135  433439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:48:05.249169  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:48:07.606284  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.607161  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.720025  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.219059  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.889921  433439 crio.go:462] duration metric: took 1.645848876s to copy over tarball
	I0408 12:48:06.890006  433439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:48:09.403589  433439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513555281s)
	I0408 12:48:09.403620  433439 crio.go:469] duration metric: took 2.513669951s to extract the tarball
	I0408 12:48:09.403627  433439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:48:09.446487  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:09.494576  433439 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:48:09.494606  433439 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:48:09.494614  433439 kubeadm.go:928] updating node { 192.168.50.7 8444 v1.29.3 crio true true} ...
	I0408 12:48:09.494822  433439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-527454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:48:09.494917  433439 ssh_runner.go:195] Run: crio config
	I0408 12:48:09.541809  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:09.541839  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:09.541859  433439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:48:09.541887  433439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-527454 NodeName:default-k8s-diff-port-527454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:48:09.542105  433439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-527454"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:48:09.542201  433439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:48:09.553494  433439 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:48:09.553591  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:48:09.564970  433439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0408 12:48:09.584888  433439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:48:09.604538  433439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0408 12:48:09.623993  433439 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0408 12:48:09.628368  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:09.642170  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:09.789791  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:48:09.808943  433439 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454 for IP: 192.168.50.7
	I0408 12:48:09.808972  433439 certs.go:194] generating shared ca certs ...
	I0408 12:48:09.808995  433439 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:48:09.809194  433439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:48:09.809242  433439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:48:09.809253  433439 certs.go:256] generating profile certs ...
	I0408 12:48:09.809344  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/client.key
	I0408 12:48:09.809415  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key.ad1d04eb
	I0408 12:48:09.809457  433439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key
	I0408 12:48:09.809645  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:48:09.809699  433439 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:48:09.809713  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:48:09.809742  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:48:09.809764  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:48:09.809792  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:48:09.809851  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:09.810516  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:48:09.866085  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:48:09.899718  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:48:09.941704  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:48:09.976180  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 12:48:10.014420  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:48:10.044380  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:48:10.072034  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:48:10.099417  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:48:10.126143  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:48:10.154244  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:48:10.183954  433439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:48:10.207277  433439 ssh_runner.go:195] Run: openssl version
	I0408 12:48:10.213691  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:48:10.228406  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233736  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233798  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.240236  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:48:10.253382  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:48:10.267783  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273234  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273318  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.279925  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:48:10.292710  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:48:10.305381  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310629  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310703  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.317063  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:48:10.330320  433439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:48:10.336138  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:48:10.343341  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:48:10.350536  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:48:10.357665  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:48:10.364925  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:48:10.372314  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
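Each existing control-plane certificate is verified to remain valid for at least 24 hours via "openssl x509 -checkend 86400" before the cluster restart proceeds. A tiny local stand-in for those checks (paths are taken from the log; the helper name is hypothetical):

// Sketch: report whether each certificate is still valid for at least 24h.
package main

import (
	"fmt"
	"os/exec"
)

func certValidFor24h(path string) bool {
	// openssl exits non-zero when the certificate expires within the given window (in seconds).
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid >24h: %v\n", c, certValidFor24h(c))
	}
}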
	I0408 12:48:10.380001  433439 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:48:10.380107  433439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:48:10.380174  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.425378  433439 cri.go:89] found id: ""
	I0408 12:48:10.425475  433439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:48:10.438972  433439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:48:10.439000  433439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:48:10.439005  433439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:48:10.439051  433439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:48:10.452072  433439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:48:10.453410  433439 kubeconfig.go:125] found "default-k8s-diff-port-527454" server: "https://192.168.50.7:8444"
	I0408 12:48:10.456022  433439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:48:10.469116  433439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0408 12:48:10.469171  433439 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:48:10.469188  433439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:48:10.469256  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.517874  433439 cri.go:89] found id: ""
	I0408 12:48:10.517969  433439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:48:10.538088  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:48:10.551560  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:48:10.551580  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:48:10.551636  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:48:10.564123  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:48:10.564209  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:48:10.578691  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:48:10.590692  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:48:10.590765  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:48:10.602902  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.616831  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:48:10.616922  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.629213  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:48:10.641625  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:48:10.641709  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:48:10.653162  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:48:10.665261  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:10.811712  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.107002  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.606976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:12.188805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.221750  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.901885  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09013292s)
	I0408 12:48:11.975836  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.237051  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.329550  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.460345  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:48:12.460457  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.961443  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.460681  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.520828  433439 api_server.go:72] duration metric: took 1.060470201s to wait for apiserver process to appear ...
	I0408 12:48:13.520866  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:48:13.520899  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:13.521407  433439 api_server.go:269] stopped: https://192.168.50.7:8444/healthz: Get "https://192.168.50.7:8444/healthz": dial tcp 192.168.50.7:8444: connect: connection refused
	I0408 12:48:14.022007  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.564485  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.564526  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:16.564543  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.617870  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.617904  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:17.021290  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.026545  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.026578  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:17.521124  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.529552  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.529596  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:18.021125  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:18.037000  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:48:18.049656  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:48:18.049699  433439 api_server.go:131] duration metric: took 4.528823991s to wait for apiserver health ...
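The wait above keeps polling https://192.168.50.7:8444/healthz, tolerating the connection-refused, 403 and 500 responses seen while the apiserver bootstraps, until the endpoint returns 200 "ok". A simplified polling loop (a sketch only; it skips TLS verification purely for illustration, which the real code does not):

// Sketch: poll the apiserver healthz endpoint until it reports healthy or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.50.7:8444/healthz"
	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // connection refused while the apiserver is still coming up
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz:", string(body)) // "ok"
			return
		}
	}
	fmt.Println("apiserver did not become healthy in time")
}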
	I0408 12:48:18.049722  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:18.049730  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:18.051495  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:48:16.607222  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:18.607837  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.717612  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:19.217050  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.052916  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:48:18.072115  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:48:18.111408  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:48:18.130585  433439 system_pods.go:59] 8 kube-system pods found
	I0408 12:48:18.130629  433439 system_pods.go:61] "coredns-76f75df574-r99kj" [171e271b-eec6-4238-afb1-82a2f228c225] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:48:18.130641  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [7019f1eb-58ef-4b1f-acf3-ed3c1ed84623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:48:18.130651  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [80ccd16d-d883-4c92-bb13-abe2d412532c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:48:18.130661  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [78d513aa-1f24-42c0-bfb9-4c20fdee63f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:48:18.130669  433439 system_pods.go:61] "kube-proxy-ztmmc" [de09a26e-cd95-401a-b575-977fcd660c47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:48:18.130683  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [eac4d549-1763-45b8-be11-b3b9e83f5110] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:48:18.130702  433439 system_pods.go:61] "metrics-server-57f55c9bc5-44qbm" [52631fc6-84d0-443b-ba42-de35a65db0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:48:18.130714  433439 system_pods.go:61] "storage-provisioner" [82e8b0d0-6c22-4644-8bd1-b48887b0fe82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:48:18.130730  433439 system_pods.go:74] duration metric: took 19.293309ms to wait for pod list to return data ...
	I0408 12:48:18.130745  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:48:18.135625  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:48:18.135663  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:48:18.135679  433439 node_conditions.go:105] duration metric: took 4.924641ms to run NodePressure ...
	I0408 12:48:18.135724  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:18.416272  433439 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424302  433439 kubeadm.go:733] kubelet initialised
	I0408 12:48:18.424325  433439 kubeadm.go:734] duration metric: took 8.015642ms waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424342  433439 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:48:18.436706  433439 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.447063  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447102  433439 pod_ready.go:81] duration metric: took 10.361708ms for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.447116  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447126  433439 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.460464  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460496  433439 pod_ready.go:81] duration metric: took 13.357612ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.460513  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460523  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.469991  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470035  433439 pod_ready.go:81] duration metric: took 9.502493ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.470072  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470083  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.516886  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516920  433439 pod_ready.go:81] duration metric: took 46.823396ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.516933  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516940  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915101  433439 pod_ready.go:92] pod "kube-proxy-ztmmc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:18.915131  433439 pod_ready.go:81] duration metric: took 398.182437ms for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915144  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:20.922456  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.107083  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.108249  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.219995  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.718091  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.922607  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:24.922155  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:24.922185  433439 pod_ready.go:81] duration metric: took 6.007031338s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:24.922200  433439 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:25.607653  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.216429  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.218553  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.717516  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.931412  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:29.430930  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.608369  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:33.107424  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:32.717551  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.216256  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.931958  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:34.430950  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.607018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.607820  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:40.106361  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.217721  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:39.218016  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.929987  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:38.931900  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.429986  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:42.605609  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:44.606744  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.717196  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:43.718405  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.430411  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:45.932993  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.607058  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.607716  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.216568  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.218325  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.718153  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:48.430488  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.432250  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:51.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.606447  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.217434  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:55.717265  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:52.928902  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.931253  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:56.106505  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.108187  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.218754  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.718600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:57.431032  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:59.930470  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.608450  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.106647  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.218064  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:01.933210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:04.428894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.428974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.606895  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:08.106819  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:07.718023  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:08.429894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.930925  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.609607  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:13.106296  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.716182  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.717779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
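Each pass above follows the same diagnostics loop: the log collector queries CRI-O for every expected control-plane container by name and, finding none, falls back to gathering host logs. The same queries can be replayed by hand on the node; this is a minimal sketch that reuses only commands already visible in the log:

    # list any container (running or exited) for a given component
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    sudo crictl ps -a --quiet --name=kube-scheduler
    sudo crictl ps -a --quiet --name=kube-controller-manager

An empty result for every component, as seen here, indicates no control-plane containers exist on the node at all.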
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:12.931911  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.933011  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:15.607174  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:18.106346  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:17.216822  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.216874  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
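The pod_ready.go lines interleaved here come from other test clusters polling a metrics-server pod that never reports Ready. The condition being polled can be inspected directly with kubectl; a minimal sketch, assuming <profile> stands in for the relevant minikube profile (a placeholder, not shown in this excerpt) and using a pod name taken from the log:

    kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-44qbm \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

A value of False matches the "Ready":"False" status the poller keeps reporting.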
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
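Every "describe nodes" attempt fails with "connection to the server localhost:8443 was refused", which means nothing is listening on the apiserver port inside the guest. A quick manual check, assuming <profile> is the affected minikube profile (again a placeholder):

    minikube ssh -p <profile>
    sudo crictl ps -a --quiet --name=kube-apiserver   # is the apiserver container present at all?
    curl -k https://localhost:8443/healthz            # does anything answer on the apiserver port?

In this run both would come back empty/refused, consistent with the loop above.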
	I0408 12:49:17.429653  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.432135  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:20.609415  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.105576  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:25.106666  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:21.217932  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.720613  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:21.929027  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.933879  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.429452  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:27.106798  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:29.605690  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.216525  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.216788  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.217600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
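When no containers are found, the collector falls back to host-level logs. The exact commands it runs appear throughout this section and can be replayed by hand on the node; they are usually the first places to look for why the kubelet never created the static control-plane pods:

    sudo journalctl -u kubelet -n 400                                          # kubelet log tail
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings and errors
    sudo journalctl -u crio -n 400                                             # CRI-O log tail
    sudo crictl ps -a                                                          # all containers, any state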
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.433974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.930607  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.606729  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.107220  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:32.218719  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.718165  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
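The outer wait is driven by the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe at the top of each cycle: it asks whether any process whose full command line matches that pattern is running, and the cycle repeats until one appears or the wait times out. The same probe, quoted for an interactive shell:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

An empty result (pgrep exit status 1) is what keeps this retry loop going.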
	I0408 12:49:33.429854  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:35.933211  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:36.605433  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:38.606014  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.217818  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:39.717250  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.431584  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:40.930328  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.106567  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:43.606770  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.718244  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.218855  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:43.428915  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:45.430859  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.106078  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.108320  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.718065  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.721772  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.432167  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:49.929922  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:50.606575  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.106564  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:51.217123  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.217785  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.217941  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:52.429230  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:54.929643  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.607214  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:58.109657  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:57.717805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.219041  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.430097  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:59.430226  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.605915  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:03.106990  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.717304  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.717757  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:01.930407  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.429884  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:06.430557  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:05.605804  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.606737  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:10.107370  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.218275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:09.716548  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.930029  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.429317  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.107556  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:14.606578  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.717001  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:13.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.719875  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:13.429712  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.430255  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:17.106153  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:19.607656  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.217812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.719543  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:17.930126  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.430210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:22.107118  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.107812  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:23.217266  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:25.218476  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:22.928804  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.930608  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:26.607216  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:28.608092  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.717812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:30.218079  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.429985  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:29.928718  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:31.106901  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.606293  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:32.717659  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.217468  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:31.928982  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.929758  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.930610  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:36.110235  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:38.606389  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.217645  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:39.218104  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:38.429765  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.430575  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.607976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.106175  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:45.107548  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:41.716953  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.717149  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:42.928908  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:44.929425  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:47.606676  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.608190  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.217528  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:48.717623  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:50.729538  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.929626  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.429810  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.106861  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.608143  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:53.217707  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:55.718120  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:51.929891  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.430216  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:57.106572  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.108219  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.216315  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:00.218162  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:56.931254  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.429578  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.606793  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.105581  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:02.218796  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.718232  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:01.929336  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:03.929754  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.429604  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.106159  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.608247  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:07.216779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:09.217275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.929238  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:10.929862  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.107603  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.611978  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.718498  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:12.931867  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:15.430299  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.106552  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.106622  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.108053  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.217595  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.217758  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.220115  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
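(Editor's note) The block above is one pass of minikube's diagnostic log collection for the old-k8s-version node: each control-plane component is probed with crictl and, because no kube-apiserver ever came up, every probe returns an empty ID list and the follow-up kubectl describe nodes call is refused on localhost:8443. Below is a minimal local sketch of that probe loop, assuming crictl and sudo are available on the host; it illustrates the pattern visible in the log and is not minikube's actual ssh_runner-based implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component list the log probes, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors the logged command: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the repeated logs.go:278 "No container was found matching" warnings.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}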
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:17.930896  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.430609  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.108741  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.605215  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.716480  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.720217  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:22.929962  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:25.430340  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.605294  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:28.606623  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:27.218727  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.716524  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.430647  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.929105  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:30.607227  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.106814  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.106890  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:31.716747  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.718962  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:31.930145  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.931893  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.932546  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:37.606783  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:39.607738  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.217708  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:38.717067  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.721801  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:38.429490  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.932035  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:42.106879  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:44.607134  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:41.210350  433557 pod_ready.go:81] duration metric: took 4m0.000311819s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:41.210399  433557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 12:51:41.210413  433557 pod_ready.go:38] duration metric: took 4m3.201150727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
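(Editor's note) The two entries just above mark the 4-minute WaitExtra deadline expiring for metrics-server-569cc877fc-dbb9b: the pod never reported Ready, the wait ends with "context deadline exceeded", and the run moves on to waiting for the apiserver process. A minimal sketch of a deadline-bounded readiness poll of that shape follows; isPodReady is a hypothetical stand-in, not minikube's pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"
)

// isPodReady is a hypothetical stand-in for a real readiness check
// (minikube inspects the pod's Ready condition via the API server).
func isPodReady(pod string) bool { return false }

func waitPodReady(ctx context.Context, pod string) error {
	ticker := time.NewTicker(2 * time.Second) // the log shows checks roughly every 2s
	defer ticker.Stop()
	for {
		if isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			// Matches: "WaitExtra: waitPodCondition: context deadline exceeded"
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, "metrics-server-569cc877fc-dbb9b"); err != nil {
		fmt.Println(err)
	}
}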
	I0408 12:51:41.210464  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:51:41.210520  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:41.210591  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:41.269963  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:41.269999  433557 cri.go:89] found id: ""
	I0408 12:51:41.270010  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:41.270072  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.275411  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:41.275495  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:41.319478  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:41.319517  433557 cri.go:89] found id: ""
	I0408 12:51:41.319529  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:41.319590  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.329956  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:41.330045  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:41.380017  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:41.380049  433557 cri.go:89] found id: ""
	I0408 12:51:41.380061  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:41.380131  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.384973  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:41.385077  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:41.429757  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:41.429786  433557 cri.go:89] found id: ""
	I0408 12:51:41.429798  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:41.429863  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.435404  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:41.435488  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:41.484998  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:41.485031  433557 cri.go:89] found id: ""
	I0408 12:51:41.485042  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:41.485111  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.489802  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:41.489878  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:41.543982  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.544016  433557 cri.go:89] found id: ""
	I0408 12:51:41.544028  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:41.544096  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.548766  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:41.548836  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:41.588398  433557 cri.go:89] found id: ""
	I0408 12:51:41.588425  433557 logs.go:276] 0 containers: []
	W0408 12:51:41.588433  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:41.588439  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:41.588498  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:41.635748  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:41.635771  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:41.635775  433557 cri.go:89] found id: ""
	I0408 12:51:41.635782  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:41.635849  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.641800  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.646173  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:41.646206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.717189  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:41.717228  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:41.779618  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:41.779653  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:41.840050  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:41.840092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:41.855982  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:41.856016  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:42.016416  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:42.016455  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:42.085493  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:42.085538  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:42.132590  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:42.132626  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:42.642069  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:42.642125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:42.708516  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:42.708566  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:42.759072  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:42.759125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:42.810189  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:42.810242  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:42.855931  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:42.855971  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
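(Editor's note) Unlike the empty probes earlier in this log, the 433557 run does find one container per component, so the gatherer tails the last 400 lines of each found ID with crictl logs. A short sketch of that step, using two container IDs taken from the log above; illustrative only, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the logged command:
//   sudo /usr/bin/crictl logs --tail 400 <container-id>
func tailContainerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// IDs copied from the log above (kube-apiserver and etcd).
	for _, id := range []string{
		"380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb",
		"31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f",
	} {
		logs, err := tailContainerLogs(id)
		if err != nil {
			fmt.Printf("tail %s: %v\n", id[:12], err)
			continue
		}
		fmt.Println(logs)
	}
}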
	I0408 12:51:45.396658  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.414640  433557 api_server.go:72] duration metric: took 4m14.728700184s to wait for apiserver process to appear ...
	I0408 12:51:45.414671  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:51:45.414714  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.414772  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.460983  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:45.461012  433557 cri.go:89] found id: ""
	I0408 12:51:45.461023  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:45.461102  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.466928  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.467037  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.516723  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:45.516746  433557 cri.go:89] found id: ""
	I0408 12:51:45.516755  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:45.516813  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.521315  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.521413  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.560838  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.560865  433557 cri.go:89] found id: ""
	I0408 12:51:45.560876  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:45.560926  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.565852  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.565937  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.610154  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:45.610175  433557 cri.go:89] found id: ""
	I0408 12:51:45.610183  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:45.610229  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.615014  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.615098  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.658261  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:45.658292  433557 cri.go:89] found id: ""
	I0408 12:51:45.658304  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:45.658367  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.663148  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.663242  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:45.708805  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.708838  433557 cri.go:89] found id: ""
	I0408 12:51:45.708850  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:45.708906  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.713733  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:45.713800  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:45.763432  433557 cri.go:89] found id: ""
	I0408 12:51:45.763465  433557 logs.go:276] 0 containers: []
	W0408 12:51:45.763477  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:45.763486  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:45.763555  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:45.808689  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:45.808711  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.808715  433557 cri.go:89] found id: ""
	I0408 12:51:45.808723  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:45.808782  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.813386  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.818556  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:45.818589  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:43.429577  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:45.431057  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:47.106109  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:48.599265  433674 pod_ready.go:81] duration metric: took 4m0.000260398s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:48.599302  433674 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:51:48.599335  433674 pod_ready.go:38] duration metric: took 4m13.995684279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:48.599373  433674 kubeadm.go:591] duration metric: took 4m22.072516751s to restartPrimaryControlPlane
	W0408 12:51:48.599529  433674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:48.599619  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:45.864458  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:45.864503  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.907964  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:45.908000  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.980082  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:45.980123  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:46.041294  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:46.041330  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:46.102117  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:46.102171  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:46.188553  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:46.188583  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:46.234191  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:46.234229  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:46.281240  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.281273  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.721047  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.721092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.781387  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.781429  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:46.797003  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.797043  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:46.917073  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.917109  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:49.481948  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:51:49.488261  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:51:49.489694  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:51:49.489726  433557 api_server.go:131] duration metric: took 4.075047023s to wait for apiserver health ...
	I0408 12:51:49.489737  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:51:49.489772  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:49.489845  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:49.535955  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.535980  433557 cri.go:89] found id: ""
	I0408 12:51:49.535990  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:49.536052  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.543143  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:49.543239  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.590041  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:49.590075  433557 cri.go:89] found id: ""
	I0408 12:51:49.590087  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:49.590155  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.595726  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.595803  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.645009  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:49.645046  433557 cri.go:89] found id: ""
	I0408 12:51:49.645057  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:49.645110  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.650243  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.650329  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.693859  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:49.693882  433557 cri.go:89] found id: ""
	I0408 12:51:49.693895  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:49.693972  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.699620  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.699709  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.755614  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:49.755646  433557 cri.go:89] found id: ""
	I0408 12:51:49.755657  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:49.755739  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.761838  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.761913  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.808919  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:49.808950  433557 cri.go:89] found id: ""
	I0408 12:51:49.808961  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:49.809040  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.813965  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.814046  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.859700  433557 cri.go:89] found id: ""
	I0408 12:51:49.859737  433557 logs.go:276] 0 containers: []
	W0408 12:51:49.859748  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.859757  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:49.859832  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:49.908020  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:49.908044  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:49.908050  433557 cri.go:89] found id: ""
	I0408 12:51:49.908060  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:49.908129  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.913034  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.919193  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:49.919233  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.984657  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.984704  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:50.003487  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:50.003526  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:50.139417  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:50.139481  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:50.240166  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:50.240206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:50.288776  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:50.288823  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:50.339222  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:50.339252  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:50.402263  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:50.402308  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:50.461894  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:50.461946  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:50.507329  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:50.507373  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:50.576851  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:50.576894  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:47.932221  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.430702  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.947824  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:50.947878  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:51.007034  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:51.007084  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:53.563768  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:51:53.563811  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.563818  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.563824  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.563829  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.563835  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.563840  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.563850  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.563857  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.563870  433557 system_pods.go:74] duration metric: took 4.074125222s to wait for pod list to return data ...
	I0408 12:51:53.563884  433557 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:51:53.566991  433557 default_sa.go:45] found service account: "default"
	I0408 12:51:53.567015  433557 default_sa.go:55] duration metric: took 3.122873ms for default service account to be created ...
	I0408 12:51:53.567024  433557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:51:53.574517  433557 system_pods.go:86] 8 kube-system pods found
	I0408 12:51:53.574558  433557 system_pods.go:89] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.574565  433557 system_pods.go:89] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.574570  433557 system_pods.go:89] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.574575  433557 system_pods.go:89] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.574581  433557 system_pods.go:89] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.574587  433557 system_pods.go:89] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.574598  433557 system_pods.go:89] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.574605  433557 system_pods.go:89] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.574616  433557 system_pods.go:126] duration metric: took 7.585497ms to wait for k8s-apps to be running ...
	I0408 12:51:53.574629  433557 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:51:53.574720  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:53.597605  433557 system_svc.go:56] duration metric: took 22.957663ms WaitForService to wait for kubelet
	I0408 12:51:53.597658  433557 kubeadm.go:576] duration metric: took 4m22.91172229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:51:53.597683  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:51:53.601940  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:51:53.601992  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:51:53.602009  433557 node_conditions.go:105] duration metric: took 4.320913ms to run NodePressure ...
	I0408 12:51:53.602028  433557 start.go:240] waiting for startup goroutines ...
	I0408 12:51:53.602040  433557 start.go:245] waiting for cluster config update ...
	I0408 12:51:53.602060  433557 start.go:254] writing updated cluster config ...
	I0408 12:51:53.602426  433557 ssh_runner.go:195] Run: rm -f paused
	I0408 12:51:53.660257  433557 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:51:53.662533  433557 out.go:177] * Done! kubectl is now configured to use "no-preload-135234" cluster and "default" namespace by default
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:51:52.434513  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:54.930343  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:57.432046  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:59.436031  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:01.930142  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:03.931249  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:06.429806  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:08.929311  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:10.929707  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:13.430287  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:15.430449  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:17.933664  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:20.428983  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:21.300307  433674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.700649463s)
	I0408 12:52:21.300429  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:21.321628  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:21.334359  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:21.345697  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:21.345755  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:21.345804  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:52:21.356798  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:21.356868  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:21.368622  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:52:21.379589  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:21.379676  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:21.391211  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.401783  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:21.401874  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.413655  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:52:21.424585  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:21.424673  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:21.436887  433674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:21.495891  433674 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:21.496022  433674 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:21.667820  433674 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:21.667973  433674 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:21.668100  433674 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:21.904532  433674 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:21.906631  433674 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:21.906736  433674 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:21.906833  433674 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:21.906962  433674 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:21.907084  433674 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:21.907206  433674 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:21.907283  433674 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:21.907372  433674 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:21.907705  433674 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:21.908164  433674 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:21.908536  433674 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:21.908852  433674 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:21.908942  433674 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:22.096319  433674 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:22.286425  433674 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:22.442534  433674 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:22.542901  433674 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:22.959098  433674 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:22.959656  433674 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:22.962359  433674 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:22.965011  433674 out.go:204]   - Booting up control plane ...
	I0408 12:52:22.965148  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:22.965830  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:22.966718  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:22.987425  433674 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:22.988618  433674 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:22.988690  433674 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:23.134634  433674 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:52:22.429735  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.431237  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.923026  433439 pod_ready.go:81] duration metric: took 4m0.000804438s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	E0408 12:52:24.923079  433439 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:52:24.923103  433439 pod_ready.go:38] duration metric: took 4m6.498748448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:24.923143  433439 kubeadm.go:591] duration metric: took 4m14.484131334s to restartPrimaryControlPlane
	W0408 12:52:24.923222  433439 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:52:24.923260  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:52:29.641484  433674 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505486 seconds
	I0408 12:52:29.659612  433674 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:52:29.683882  433674 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:52:30.237806  433674 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:52:30.238135  433674 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-488947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:52:30.755095  433674 kubeadm.go:309] [bootstrap-token] Using token: kwhj7g.e2hm9yupaxknooep
	I0408 12:52:30.756904  433674 out.go:204]   - Configuring RBAC rules ...
	I0408 12:52:30.757044  433674 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:52:30.763322  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:52:30.776489  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:52:30.780180  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:52:30.784949  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:52:30.789409  433674 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:52:30.810228  433674 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:52:31.071672  433674 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:52:31.180390  433674 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:52:31.180421  433674 kubeadm.go:309] 
	I0408 12:52:31.180493  433674 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:52:31.180504  433674 kubeadm.go:309] 
	I0408 12:52:31.180626  433674 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:52:31.180652  433674 kubeadm.go:309] 
	I0408 12:52:31.180682  433674 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:52:31.180758  433674 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:52:31.180823  433674 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:52:31.180835  433674 kubeadm.go:309] 
	I0408 12:52:31.180898  433674 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:52:31.180908  433674 kubeadm.go:309] 
	I0408 12:52:31.180967  433674 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:52:31.180978  433674 kubeadm.go:309] 
	I0408 12:52:31.181069  433674 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:52:31.181200  433674 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:52:31.181301  433674 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:52:31.181312  433674 kubeadm.go:309] 
	I0408 12:52:31.181446  433674 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:52:31.181564  433674 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:52:31.181577  433674 kubeadm.go:309] 
	I0408 12:52:31.181706  433674 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.181869  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:52:31.181923  433674 kubeadm.go:309] 	--control-plane 
	I0408 12:52:31.181933  433674 kubeadm.go:309] 
	I0408 12:52:31.182039  433674 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:52:31.182055  433674 kubeadm.go:309] 
	I0408 12:52:31.182167  433674 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.182323  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:52:31.182467  433674 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:52:31.182492  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:52:31.182502  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:52:31.184299  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:52:31.185716  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:52:31.217708  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:52:31.277627  433674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:52:31.277716  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:31.277740  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-488947 minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=embed-certs-488947 minikube.k8s.io/primary=true
	I0408 12:52:31.591490  433674 ops.go:34] apiserver oom_adj: -16
	I0408 12:52:31.591651  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.092642  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.591845  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.092645  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.592585  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.092066  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.592232  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.091882  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.591794  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.091849  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.592616  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.091816  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.091756  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.592114  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.092524  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.591838  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.091853  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.591747  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.092421  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.592611  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.092369  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.092638  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.592549  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.091831  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.592358  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.799776  433674 kubeadm.go:1107] duration metric: took 13.522136387s to wait for elevateKubeSystemPrivileges
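The long run of "kubectl get sa default" calls above is a poll loop: the command is retried roughly every 500ms until the default ServiceAccount exists, at which point the wait for elevated kube-system privileges ends (13.5s in this run). A minimal sketch of that loop, assuming the kubectl binary and kubeconfig paths shown in the log and an arbitrary 2-minute deadline:

    // Minimal sketch of the poll loop seen above: retry `kubectl get sa default`
    // every 500ms until it succeeds or a deadline passes. Binary and kubeconfig
    // paths mirror the log; the 2-minute deadline is an assumption.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.29.3/kubectl"
        deadline := time.Now().Add(2 * time.Minute)

        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }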
	W0408 12:52:44.799833  433674 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:52:44.799845  433674 kubeadm.go:393] duration metric: took 5m18.325910079s to StartCluster
	I0408 12:52:44.799870  433674 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.799981  433674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:52:44.802396  433674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.802704  433674 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:52:44.804525  433674 out.go:177] * Verifying Kubernetes components...
	I0408 12:52:44.802776  433674 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:52:44.802921  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:52:44.805724  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:52:44.805735  433674 addons.go:69] Setting metrics-server=true in profile "embed-certs-488947"
	I0408 12:52:44.805751  433674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488947"
	I0408 12:52:44.805777  433674 addons.go:234] Setting addon metrics-server=true in "embed-certs-488947"
	W0408 12:52:44.805792  433674 addons.go:243] addon metrics-server should already be in state true
	I0408 12:52:44.805824  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805727  433674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488947"
	I0408 12:52:44.805869  433674 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-488947"
	W0408 12:52:44.805883  433674 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:52:44.805915  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805834  433674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488947"
	I0408 12:52:44.806260  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806262  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806266  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806286  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806288  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806326  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.824170  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0408 12:52:44.824862  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.825517  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.825547  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.826049  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.826714  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.826752  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.827345  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0408 12:52:44.827569  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0408 12:52:44.828195  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828218  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828860  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.828892  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829023  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.829040  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829499  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829541  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829687  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.830201  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.830247  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.834128  433674 addons.go:234] Setting addon default-storageclass=true in "embed-certs-488947"
	W0408 12:52:44.834156  433674 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:52:44.834189  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.834569  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.834611  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.845829  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 12:52:44.846556  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.847545  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.847571  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.848210  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.848478  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.850407  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.850783  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0408 12:52:44.853144  433674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:52:44.851322  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.854214  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0408 12:52:44.855198  433674 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:44.855222  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:52:44.855245  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.855434  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.855766  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855797  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.855936  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855956  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.856190  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856264  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856382  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.856937  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.856973  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.857994  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.859623  433674 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:52:44.860991  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:52:44.861012  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:52:44.858778  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.861032  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.861051  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.861072  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.859293  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.861282  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.861617  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.861817  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.863813  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864274  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.864299  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864483  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.864681  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.864846  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.865028  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.874355  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0408 12:52:44.874834  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.875388  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.875418  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.875775  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.875967  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.877519  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.877786  433674 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:44.877803  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:52:44.877818  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.880463  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.880846  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.880874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.881040  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.881234  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.881615  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.881753  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:45.057304  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:52:45.082702  433674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.091955  433674 node_ready.go:49] node "embed-certs-488947" has status "Ready":"True"
	I0408 12:52:45.091994  433674 node_ready.go:38] duration metric: took 9.246027ms for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.092007  433674 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:45.101654  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:45.237037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:52:45.237068  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:52:45.238421  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:45.274088  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:45.295037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:52:45.295078  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:52:45.397474  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:45.397504  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:52:45.431610  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:46.375681  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101541881s)
	I0408 12:52:46.375827  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.375862  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376204  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.376244  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.137166571s)
	I0408 12:52:46.376271  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.376291  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.376309  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376313  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376319  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.377184  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377205  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377613  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.377680  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377699  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377709  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.377747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.378168  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.378182  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.413325  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.413361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.413757  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.413780  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.679538  433674 pod_ready.go:92] pod "coredns-76f75df574-4gdp4" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.679577  433674 pod_ready.go:81] duration metric: took 1.577895468s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.679596  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760007  433674 pod_ready.go:92] pod "coredns-76f75df574-r5rxq" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.760043  433674 pod_ready.go:81] duration metric: took 80.437752ms for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760059  433674 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.803070  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371401052s)
	I0408 12:52:46.803136  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803150  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803496  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803519  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803530  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803539  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803846  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803862  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803882  433674 addons.go:470] Verifying addon metrics-server=true in "embed-certs-488947"
	I0408 12:52:46.806034  433674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0408 12:52:46.804164  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.807597  433674 pod_ready.go:81] duration metric: took 47.521367ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807622  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807621  433674 addons.go:505] duration metric: took 2.004847213s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0408 12:52:46.827049  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.827075  433674 pod_ready.go:81] duration metric: took 19.440746ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.827086  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848718  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.848759  433674 pod_ready.go:81] duration metric: took 21.664037ms for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848775  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087350  433674 pod_ready.go:92] pod "kube-proxy-mqrtp" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.087387  433674 pod_ready.go:81] duration metric: took 238.602902ms for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087403  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486822  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.486863  433674 pod_ready.go:81] duration metric: took 399.44977ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486875  433674 pod_ready.go:38] duration metric: took 2.394853452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
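Each pod_ready wait above reduces to polling a pod's Ready condition until it reports True, within the 6m0s budget. A minimal sketch of one such check, assuming kubectl is on PATH and already pointed at this cluster (the log instead runs it over SSH with an explicit kubeconfig); the pod name comes from the log, the 2-second poll interval is an assumption:

    // Minimal sketch of a pod Ready check like the pod_ready waits above:
    // poll the Ready condition of a kube-system pod until it is "True".
    // Assumes kubectl on PATH, configured for the cluster in this log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(name string) bool {
        out, err := exec.Command("kubectl", "get", "pod", name, "-n", "kube-system",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
        for time.Now().Before(deadline) {
            if podReady("etcd-embed-certs-488947") {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }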
	I0408 12:52:47.486895  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:52:47.486967  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:52:47.517426  433674 api_server.go:72] duration metric: took 2.714672176s to wait for apiserver process to appear ...
	I0408 12:52:47.517461  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:52:47.517492  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:52:47.527074  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:52:47.528230  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:52:47.528285  433674 api_server.go:131] duration metric: took 10.815426ms to wait for apiserver health ...
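The healthz wait above is a plain HTTPS GET against the apiserver, treated as healthy once it returns 200 with body "ok", after which the control-plane version is read. A rough sketch of that probe, with the address taken from the log; skipping TLS verification here is an assumption made for brevity, a real client would trust the cluster CA instead:

    // Rough sketch of an apiserver health probe like the one logged above:
    // GET /healthz and expect HTTP 200 with body "ok". TLS verification is
    // skipped purely for brevity in this sketch.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.159:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }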
	I0408 12:52:47.528296  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:52:47.692054  433674 system_pods.go:59] 9 kube-system pods found
	I0408 12:52:47.692091  433674 system_pods.go:61] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:47.692096  433674 system_pods.go:61] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:47.692101  433674 system_pods.go:61] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:47.692105  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:47.692109  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:47.692112  433674 system_pods.go:61] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:47.692116  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:47.692123  433674 system_pods.go:61] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:47.692129  433674 system_pods.go:61] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:47.692137  433674 system_pods.go:74] duration metric: took 163.833038ms to wait for pod list to return data ...
	I0408 12:52:47.692146  433674 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:52:47.886668  433674 default_sa.go:45] found service account: "default"
	I0408 12:52:47.886695  433674 default_sa.go:55] duration metric: took 194.543392ms for default service account to be created ...
	I0408 12:52:47.886707  433674 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:52:48.090174  433674 system_pods.go:86] 9 kube-system pods found
	I0408 12:52:48.090212  433674 system_pods.go:89] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:48.090217  433674 system_pods.go:89] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:48.090222  433674 system_pods.go:89] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:48.090226  433674 system_pods.go:89] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:48.090232  433674 system_pods.go:89] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:48.090236  433674 system_pods.go:89] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:48.090240  433674 system_pods.go:89] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:48.090248  433674 system_pods.go:89] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:48.090253  433674 system_pods.go:89] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:48.090260  433674 system_pods.go:126] duration metric: took 203.547421ms to wait for k8s-apps to be running ...
	I0408 12:52:48.090266  433674 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:52:48.090312  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:48.106285  433674 system_svc.go:56] duration metric: took 15.998172ms WaitForService to wait for kubelet
	I0408 12:52:48.106322  433674 kubeadm.go:576] duration metric: took 3.303579521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:52:48.106345  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:52:48.287351  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:52:48.287381  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:52:48.287392  433674 node_conditions.go:105] duration metric: took 181.042972ms to run NodePressure ...
	I0408 12:52:48.287403  433674 start.go:240] waiting for startup goroutines ...
	I0408 12:52:48.287410  433674 start.go:245] waiting for cluster config update ...
	I0408 12:52:48.287419  433674 start.go:254] writing updated cluster config ...
	I0408 12:52:48.287738  433674 ssh_runner.go:195] Run: rm -f paused
	I0408 12:52:48.341532  433674 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:52:48.343890  433674 out.go:177] * Done! kubectl is now configured to use "embed-certs-488947" cluster and "default" namespace by default
	I0408 12:52:57.475303  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.552015668s)
	I0408 12:52:57.475390  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:57.492800  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:57.507211  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:57.520174  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:57.520203  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:57.520267  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:52:57.531854  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:57.531939  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:57.543764  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:52:57.555407  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:57.555479  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:57.569452  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.580478  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:57.580575  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.591819  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:52:57.602496  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:57.602589  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
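The grep-and-remove sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane URL and deletes any file that does not contain it (in this run the files are simply missing), so that the following kubeadm init regenerates them. A compact sketch of that cleanup, with the endpoint taken from this log:

    // Compact sketch of the stale-kubeconfig cleanup above: remove each
    // kubeconfig unless it already points at the expected control-plane
    // endpoint. Missing files stay missing; kubeadm init rewrites them anyway.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const want = "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), want) {
                os.Remove(f) // equivalent to the `sudo rm -f` calls in the log
                fmt.Println("removed (or absent):", f)
            }
        }
    }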
	I0408 12:52:57.613811  433439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:57.669998  433439 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:57.670137  433439 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:57.830674  433439 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:57.830802  433439 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:57.830882  433439 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:58.090382  433439 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:58.092626  433439 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:58.092733  433439 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:58.092809  433439 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:58.092906  433439 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:58.093027  433439 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:58.093130  433439 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:58.093202  433439 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:58.093547  433439 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:58.093941  433439 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:58.094342  433439 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:58.094708  433439 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:58.095077  433439 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:58.095159  433439 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:58.328890  433439 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:58.516475  433439 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:58.830765  433439 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:59.052737  433439 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:59.306668  433439 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:59.307305  433439 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:59.312102  433439 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:59.314983  433439 out.go:204]   - Booting up control plane ...
	I0408 12:52:59.315104  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:59.315191  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:59.315305  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:59.334624  433439 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:59.335637  433439 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:59.335713  433439 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:59.486408  433439 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:05.490227  433439 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002996 seconds
	I0408 12:53:05.526221  433439 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:53:05.553758  433439 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:53:06.101116  433439 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:53:06.101340  433439 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-527454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:53:06.616939  433439 kubeadm.go:309] [bootstrap-token] Using token: oe56hb.uz3a0dd96vnry1w3
	I0408 12:53:06.618840  433439 out.go:204]   - Configuring RBAC rules ...
	I0408 12:53:06.619038  433439 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:53:06.625364  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:53:06.638696  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:53:06.643811  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:53:06.647895  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:53:06.651857  433439 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:53:06.677056  433439 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:53:06.939588  433439 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:53:07.038633  433439 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:53:07.041464  433439 kubeadm.go:309] 
	I0408 12:53:07.041565  433439 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:53:07.041578  433439 kubeadm.go:309] 
	I0408 12:53:07.041680  433439 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:53:07.041699  433439 kubeadm.go:309] 
	I0408 12:53:07.041723  433439 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:53:07.041824  433439 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:53:07.041906  433439 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:53:07.041917  433439 kubeadm.go:309] 
	I0408 12:53:07.041988  433439 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:53:07.041998  433439 kubeadm.go:309] 
	I0408 12:53:07.042103  433439 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:53:07.042123  433439 kubeadm.go:309] 
	I0408 12:53:07.042168  433439 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:53:07.042253  433439 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:53:07.042351  433439 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:53:07.042361  433439 kubeadm.go:309] 
	I0408 12:53:07.042588  433439 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:53:07.042708  433439 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:53:07.042719  433439 kubeadm.go:309] 
	I0408 12:53:07.042823  433439 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.042959  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:53:07.042994  433439 kubeadm.go:309] 	--control-plane 
	I0408 12:53:07.043003  433439 kubeadm.go:309] 
	I0408 12:53:07.043127  433439 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:53:07.043143  433439 kubeadm.go:309] 
	I0408 12:53:07.043253  433439 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.043400  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:53:07.043583  433439 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:53:07.043608  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:53:07.043620  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:53:07.045283  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:53:07.046614  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:53:07.074907  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:53:07.107168  433439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:53:07.107232  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.107256  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-527454 minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=default-k8s-diff-port-527454 minikube.k8s.io/primary=true
	I0408 12:53:07.208551  433439 ops.go:34] apiserver oom_adj: -16
	I0408 12:53:07.395206  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.896090  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.396097  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.896240  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.395654  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.895751  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.396242  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.896204  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.395766  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.895555  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.396014  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.896092  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.395507  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.895832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.395237  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.895333  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.396191  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.895561  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.395832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.895785  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.395460  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.895320  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.395826  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.896002  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.396326  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.514796  433439 kubeadm.go:1107] duration metric: took 12.407623504s to wait for elevateKubeSystemPrivileges
	W0408 12:53:19.514843  433439 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:53:19.514856  433439 kubeadm.go:393] duration metric: took 5m9.134867072s to StartCluster
	I0408 12:53:19.514882  433439 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.514981  433439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:53:19.516708  433439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.516988  433439 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:53:19.518597  433439 out.go:177] * Verifying Kubernetes components...
	I0408 12:53:19.517057  433439 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:53:19.517238  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:53:19.519990  433439 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520011  433439 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:19.520003  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0408 12:53:19.520052  433439 addons.go:243] addon metrics-server should already be in state true
	I0408 12:53:19.520095  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.519995  433439 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520161  433439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.520247  433439 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:53:19.520274  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.520519  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520521  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520555  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520616  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520639  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520556  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.536637  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0408 12:53:19.536896  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0408 12:53:19.536997  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0408 12:53:19.537194  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537369  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537453  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537748  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537772  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.537883  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537895  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538210  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538262  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538352  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.538372  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538815  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.538818  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538875  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.539030  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.542211  433439 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.542228  433439 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:53:19.542252  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.542841  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.542871  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.556920  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0408 12:53:19.557552  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0408 12:53:19.557712  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.557930  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.558468  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.558482  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.559174  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.559474  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.559852  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.559881  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.560358  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.561323  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.561357  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.561606  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.563808  433439 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:53:19.565205  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:53:19.565224  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:53:19.565252  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.565914  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0408 12:53:19.566710  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.567503  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.567521  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.568270  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.568656  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.568664  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.569109  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.569136  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.569294  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.569451  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.569707  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.569894  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.570455  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.572243  433439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:53:19.573764  433439 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:19.573784  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:53:19.573804  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.576844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577310  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.577380  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577547  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.577851  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.578009  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.578154  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.579402  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0408 12:53:19.579860  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.580428  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.580448  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.581001  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.581202  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.582638  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.582913  433439 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:19.582929  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:53:19.582949  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.585995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586456  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.586488  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586665  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.586845  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.586974  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.587077  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.782606  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:53:19.822413  433439 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833467  433439 node_ready.go:49] node "default-k8s-diff-port-527454" has status "Ready":"True"
	I0408 12:53:19.833493  433439 node_ready.go:38] duration metric: took 11.040127ms for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833503  433439 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:19.845052  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:19.990826  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:20.027800  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:53:20.027827  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:53:20.066661  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:20.168240  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:53:20.168271  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:53:20.327307  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.327336  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:53:20.390128  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.455235  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455265  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455575  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455607  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.455618  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455628  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455912  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455929  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.494751  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.494778  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.495103  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.495126  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.495132  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.454862  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.388156991s)
	I0408 12:53:21.454942  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.454956  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455313  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.455368  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455377  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455386  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.455395  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455729  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455753  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455797  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.591677  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201496165s)
	I0408 12:53:21.591745  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.591760  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.592145  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592183  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592199  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.592214  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592484  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592501  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592513  433439 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:21.594462  433439 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0408 12:53:21.595731  433439 addons.go:505] duration metric: took 2.078676652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0408 12:53:21.852741  433439 pod_ready.go:102] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"False"
	I0408 12:53:22.375241  433439 pod_ready.go:92] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.375283  433439 pod_ready.go:81] duration metric: took 2.53020032s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.375298  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.391968  433439 pod_ready.go:92] pod "coredns-76f75df574-z56lf" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.392003  433439 pod_ready.go:81] duration metric: took 16.695581ms for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.392018  433439 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398659  433439 pod_ready.go:92] pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.398688  433439 pod_ready.go:81] duration metric: took 6.657546ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398699  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407214  433439 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.407241  433439 pod_ready.go:81] duration metric: took 8.535246ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407252  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416605  433439 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.416632  433439 pod_ready.go:81] duration metric: took 9.374648ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416644  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750191  433439 pod_ready.go:92] pod "kube-proxy-tlhff" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.750220  433439 pod_ready.go:81] duration metric: took 333.570363ms for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750231  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.148980  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:23.149009  433439 pod_ready.go:81] duration metric: took 398.771226ms for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.149018  433439 pod_ready.go:38] duration metric: took 3.315505787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:23.149034  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:53:23.149087  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:53:23.165120  433439 api_server.go:72] duration metric: took 3.648094543s to wait for apiserver process to appear ...
	I0408 12:53:23.165149  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:53:23.165170  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:53:23.171016  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:53:23.172486  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:53:23.172510  433439 api_server.go:131] duration metric: took 7.354957ms to wait for apiserver health ...
	I0408 12:53:23.172518  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:53:23.353807  433439 system_pods.go:59] 9 kube-system pods found
	I0408 12:53:23.353846  433439 system_pods.go:61] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.353853  433439 system_pods.go:61] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.353859  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.353866  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.353874  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.353879  433439 system_pods.go:61] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.353883  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.353890  433439 system_pods.go:61] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.353896  433439 system_pods.go:61] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.353911  433439 system_pods.go:74] duration metric: took 181.386053ms to wait for pod list to return data ...
	I0408 12:53:23.353923  433439 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:53:23.549663  433439 default_sa.go:45] found service account: "default"
	I0408 12:53:23.549702  433439 default_sa.go:55] duration metric: took 195.766529ms for default service account to be created ...
	I0408 12:53:23.549717  433439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:53:23.755668  433439 system_pods.go:86] 9 kube-system pods found
	I0408 12:53:23.755729  433439 system_pods.go:89] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.755739  433439 system_pods.go:89] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.755748  433439 system_pods.go:89] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.755755  433439 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.755761  433439 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.755768  433439 system_pods.go:89] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.755774  433439 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.755787  433439 system_pods.go:89] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.755792  433439 system_pods.go:89] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.755805  433439 system_pods.go:126] duration metric: took 206.081481ms to wait for k8s-apps to be running ...
	I0408 12:53:23.755814  433439 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:53:23.755866  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:23.774910  433439 system_svc.go:56] duration metric: took 19.080727ms WaitForService to wait for kubelet
	I0408 12:53:23.774954  433439 kubeadm.go:576] duration metric: took 4.257931558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:53:23.774985  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:53:23.949588  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:53:23.949618  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:53:23.949630  433439 node_conditions.go:105] duration metric: took 174.638826ms to run NodePressure ...
	I0408 12:53:23.949642  433439 start.go:240] waiting for startup goroutines ...
	I0408 12:53:23.949649  433439 start.go:245] waiting for cluster config update ...
	I0408 12:53:23.949659  433439 start.go:254] writing updated cluster config ...
	I0408 12:53:23.949929  433439 ssh_runner.go:195] Run: rm -f paused
	I0408 12:53:24.004633  433439 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:53:24.007640  433439 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-527454" cluster and "default" namespace by default
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 
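	The exit above (K8S_KUBELET_NOT_RUNNING while running 'kubeadm init' for v1.20.0) comes with two pieces of advice: inspect the kubelet on the node, and retry the start with an explicit cgroup driver. A minimal sketch of the corresponding manual steps, assuming shell access via 'minikube ssh' and with the affected profile name substituted for <profile>:
	
	    # on the node (e.g. 'minikube ssh -p <profile>'):
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # then retry from the host:
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	The --extra-config value comes straight from the Suggestion line above; the node-side commands mirror kubeadm's own troubleshooting hints rather than anything specific to this run.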
	
	
	==> CRI-O <==
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.565981940Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1adac7b51d5614d8a9390b1cca3287796486136bc4510462a1c061e09c48da8e,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-87ddx,Uid:9e6f83bf-7954-4003-b66a-e62d52985947,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580767037231305,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-87ddx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e6f83bf-7954-4003-b66a-e62d52985947,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:52:46.425636250Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3ae5294d-2336-46b7-b2e8-25d6664d2c62,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580766697527670,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-08T12:52:46.382288299Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&PodSandboxMetadata{Name:kube-proxy-mqrtp,Uid:1035043f-eea0-4b45-a2df-18d477a54ae9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580764817009577,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:52:43.905562170Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-r5rxq,Ui
d:d8b96604-1b62-462c-94b9-91d009b7f20e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580764688554995,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:52:44.366613028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-4gdp4,Uid:a6d8a54f-673e-495d-a0f7-fb03ff7b447b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580764656038260,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d8a54f-673e-495d-a0f7-fb03ff7b447b,k8s-app: kube-dns,pod-templa
te-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-08T12:52:44.343401240Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-488947,Uid:98210bf73b7884f23baa7499ebf47a51,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580744226951474,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.159:8443,kubernetes.io/config.hash: 98210bf73b7884f23baa7499ebf47a51,kubernetes.io/config.seen: 2024-04-08T12:52:23.741359315Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:af7efdb70021a9cc5369014b9f7f
1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-488947,Uid:10a373ca7d5307749e7ac8e52c7d9187,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580744209514386,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10a373ca7d5307749e7ac8e52c7d9187,kubernetes.io/config.seen: 2024-04-08T12:52:23.741361508Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-488947,Uid:b31daee4d6a3afaf7bb8490632992b25,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580744199658627,Labels:map[string]string{component: kube-controller-mana
ger,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b31daee4d6a3afaf7bb8490632992b25,kubernetes.io/config.seen: 2024-04-08T12:52:23.741360599Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-488947,Uid:05ec65011bfa809c442830b77b914e27,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1712580744194682518,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.7
2.159:2379,kubernetes.io/config.hash: 05ec65011bfa809c442830b77b914e27,kubernetes.io/config.seen: 2024-04-08T12:52:23.741353771Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=968b5dad-1da0-4865-9a28-75ec323682ad name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.567323553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b8c031b-b73f-45eb-b33d-e6090d28e302 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.567411151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b8c031b-b73f-45eb-b33d-e6090d28e302 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.567624069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b8c031b-b73f-45eb-b33d-e6090d28e302 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.607007113Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba8a046c-b2ea-48fa-9dfe-09cff55b5499 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.607137595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba8a046c-b2ea-48fa-9dfe-09cff55b5499 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.608735397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2ea8065-7f3a-4624-86f2-791be1b03f9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.609125781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581310609105199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2ea8065-7f3a-4624-86f2-791be1b03f9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.609765615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35132aff-25d3-4eeb-96c2-3cca4dc35a21 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.609834594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35132aff-25d3-4eeb-96c2-3cca4dc35a21 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.610016027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35132aff-25d3-4eeb-96c2-3cca4dc35a21 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.655925044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23dbd7be-0d7a-4249-aea2-bf520141184d name=/runtime.v1.RuntimeService/Version
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.656030882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23dbd7be-0d7a-4249-aea2-bf520141184d name=/runtime.v1.RuntimeService/Version
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.657843377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b991d3af-8e2d-4166-8bc8-f93eb285dd76 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.658314411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581310658289186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b991d3af-8e2d-4166-8bc8-f93eb285dd76 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.658863897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=633e3617-d41a-4bf3-bf77-6c9ba6281247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.658935311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=633e3617-d41a-4bf3-bf77-6c9ba6281247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.659126740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=633e3617-d41a-4bf3-bf77-6c9ba6281247 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.700933781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcbaa3cb-c6bd-4137-8f15-eee807477064 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.701027714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcbaa3cb-c6bd-4137-8f15-eee807477064 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.702701383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6ea0670-bd9b-48ea-a0bb-0d2e096a70f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.703104384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581310703082410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6ea0670-bd9b-48ea-a0bb-0d2e096a70f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.703749429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ad05217-3620-4858-a4af-1b6c7dfa76cc name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.703806257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ad05217-3620-4858-a4af-1b6c7dfa76cc name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:01:50 embed-certs-488947 crio[717]: time="2024-04-08 13:01:50.704010212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ad05217-3620-4858-a4af-1b6c7dfa76cc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	152b429d9251c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b15e4d07304b4       storage-provisioner
	b6e7739783a4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   07173ed88181a       coredns-76f75df574-r5rxq
	a108c90e411d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   225aaee73a512       coredns-76f75df574-4gdp4
	4bc85eb1a4d2d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   c4f108b541515       kube-proxy-mqrtp
	7a1d769685f62       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   af7efdb70021a       kube-scheduler-embed-certs-488947
	5e6f9ed437945       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   1b3dadf9339e0       kube-controller-manager-embed-certs-488947
	3e8346ea478a6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   6f258073dacfb       kube-apiserver-embed-certs-488947
	7b845af8e0eaa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   7accfcdd7ffc6       etcd-embed-certs-488947
	
	
	==> coredns [a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-488947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-488947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=embed-certs-488947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:52:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-488947
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 13:01:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:57:57 +0000   Mon, 08 Apr 2024 12:52:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:57:57 +0000   Mon, 08 Apr 2024 12:52:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:57:57 +0000   Mon, 08 Apr 2024 12:52:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:57:57 +0000   Mon, 08 Apr 2024 12:52:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.159
	  Hostname:    embed-certs-488947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 99e3652f67bd4ae8b7c4adf9bc2dc24b
	  System UUID:                99e3652f-67bd-4ae8-b7c4-adf9bc2dc24b
	  Boot ID:                    d547bf90-e1f6-45ad-9f8e-66de3ca49156
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4gdp4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-76f75df574-r5rxq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-488947                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-488947             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-488947    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-mqrtp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-488947             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-87ddx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node embed-certs-488947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node embed-certs-488947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node embed-certs-488947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-488947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-488947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-488947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-488947 event: Registered Node embed-certs-488947 in Controller
	
	
	==> dmesg <==
	[  +0.052977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041568] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.650129] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.986381] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.661754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.402355] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.067673] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068964] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.226490] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.142214] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.365713] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +5.207459] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.065528] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.526611] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.624343] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.339915] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 8 12:52] kauditd_printk_skb: 6 callbacks suppressed
	[  +2.000954] systemd-fstab-generator[3569]: Ignoring "noauto" option for root device
	[  +6.786799] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.069831] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[ +13.789054] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.230625] systemd-fstab-generator[4209]: Ignoring "noauto" option for root device
	[Apr 8 12:53] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466] <==
	{"level":"info","ts":"2024-04-08T12:52:24.871233Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T12:52:24.871462Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d718283c8ba9c288","initial-advertise-peer-urls":["https://192.168.72.159:2380"],"listen-peer-urls":["https://192.168.72.159:2380"],"advertise-client-urls":["https://192.168.72.159:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.159:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T12:52:24.871513Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T12:52:24.871757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 switched to configuration voters=(15499182358101869192)"}
	{"level":"info","ts":"2024-04-08T12:52:24.871877Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f0e35e647fe17a2","local-member-id":"d718283c8ba9c288","added-peer-id":"d718283c8ba9c288","added-peer-peer-urls":["https://192.168.72.159:2380"]}
	{"level":"info","ts":"2024-04-08T12:52:24.871988Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.159:2380"}
	{"level":"info","ts":"2024-04-08T12:52:24.872017Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.159:2380"}
	{"level":"info","ts":"2024-04-08T12:52:25.094239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T12:52:25.094296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T12:52:25.094332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgPreVoteResp from d718283c8ba9c288 at term 1"}
	{"level":"info","ts":"2024-04-08T12:52:25.094344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.09435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgVoteResp from d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.094358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became leader at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.094368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d718283c8ba9c288 elected leader d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.098653Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.103597Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d718283c8ba9c288","local-member-attributes":"{Name:embed-certs-488947 ClientURLs:[https://192.168.72.159:2379]}","request-path":"/0/members/d718283c8ba9c288/attributes","cluster-id":"6f0e35e647fe17a2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:52:25.103833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:52:25.104412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:52:25.110977Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T12:52:25.10645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0e35e647fe17a2","local-member-id":"d718283c8ba9c288","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.111217Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.111266Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.118969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.159:2379"}
	{"level":"info","ts":"2024-04-08T12:52:25.119431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:52:25.119474Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:01:51 up 14 min,  0 users,  load average: 0.22, 0.16, 0.11
	Linux embed-certs-488947 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75] <==
	I0408 12:55:47.335211       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:57:27.473788       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:57:27.474355       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0408 12:57:28.475068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:57:28.475132       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 12:57:28.475243       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:57:28.475329       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:57:28.475431       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 12:57:28.476209       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:58:28.475645       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:28.475714       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 12:58:28.475730       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:58:28.477553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:28.477674       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 12:58:28.477684       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:00:28.476676       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:00:28.477010       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:00:28.477040       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:00:28.478289       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:00:28.478397       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:00:28.478423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9] <==
	I0408 12:56:13.798358       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:56:43.382023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:56:43.807045       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:57:13.388670       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:57:13.816353       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:57:43.394711       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:57:43.825307       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:58:13.400413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:58:13.834417       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 12:58:35.290240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="540.672µs"
	E0408 12:58:43.405611       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:58:43.845435       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 12:58:49.286126       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="182.011µs"
	E0408 12:59:13.413388       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:59:13.857787       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:59:43.419342       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:59:43.869741       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:00:13.426238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:00:13.878674       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:00:43.432711       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:00:43.892207       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:01:13.439828       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:01:13.902626       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:01:43.446000       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:01:43.911742       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c] <==
	I0408 12:52:45.555780       1 server_others.go:72] "Using iptables proxy"
	I0408 12:52:45.579584       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.159"]
	I0408 12:52:45.673537       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:52:45.673562       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:52:45.673587       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:52:45.720468       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:52:45.779386       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:52:45.779446       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:52:45.791604       1 config.go:188] "Starting service config controller"
	I0408 12:52:45.791717       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:52:45.791917       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:52:45.791995       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:52:45.798672       1 config.go:315] "Starting node config controller"
	I0408 12:52:45.798705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:52:45.993912       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 12:52:45.993997       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:52:46.000015       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893] <==
	W0408 12:52:28.527378       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 12:52:28.527433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 12:52:28.556348       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 12:52:28.556376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 12:52:28.597819       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 12:52:28.597985       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:52:28.687729       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 12:52:28.687878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 12:52:28.718468       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 12:52:28.719683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 12:52:28.745520       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 12:52:28.745717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 12:52:28.749525       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 12:52:28.749592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 12:52:28.769031       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 12:52:28.769128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 12:52:28.816344       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 12:52:28.816433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 12:52:28.817981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 12:52:28.818015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 12:52:28.824272       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 12:52:28.824330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 12:52:28.856379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 12:52:28.856445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0408 12:52:30.741188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 12:59:31 embed-certs-488947 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 12:59:31 embed-certs-488947 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 12:59:31 embed-certs-488947 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 12:59:31 embed-certs-488947 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 12:59:42 embed-certs-488947 kubelet[3900]: E0408 12:59:42.269661    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 12:59:57 embed-certs-488947 kubelet[3900]: E0408 12:59:57.270793    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:00:08 embed-certs-488947 kubelet[3900]: E0408 13:00:08.269809    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:00:21 embed-certs-488947 kubelet[3900]: E0408 13:00:21.270942    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:00:31 embed-certs-488947 kubelet[3900]: E0408 13:00:31.347787    3900 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:00:31 embed-certs-488947 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:00:31 embed-certs-488947 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:00:31 embed-certs-488947 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:00:31 embed-certs-488947 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:00:36 embed-certs-488947 kubelet[3900]: E0408 13:00:36.268479    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:00:47 embed-certs-488947 kubelet[3900]: E0408 13:00:47.270987    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:00:59 embed-certs-488947 kubelet[3900]: E0408 13:00:59.268930    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:01:12 embed-certs-488947 kubelet[3900]: E0408 13:01:12.269466    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:01:25 embed-certs-488947 kubelet[3900]: E0408 13:01:25.269598    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:01:31 embed-certs-488947 kubelet[3900]: E0408 13:01:31.347137    3900 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:01:31 embed-certs-488947 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:01:31 embed-certs-488947 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:01:31 embed-certs-488947 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:01:31 embed-certs-488947 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:01:40 embed-certs-488947 kubelet[3900]: E0408 13:01:40.269354    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:01:51 embed-certs-488947 kubelet[3900]: E0408 13:01:51.270118    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	
	
	==> storage-provisioner [152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0] <==
	I0408 12:52:46.971944       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:52:46.983842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:52:46.983934       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:52:46.995635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:52:46.995836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-488947_f0ff17a7-d3f7-445a-8c1f-feb7aaa1e27e!
	I0408 12:52:46.997085       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69cb632f-180b-4beb-a8e5-6535117668c8", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-488947_f0ff17a7-d3f7-445a-8c1f-feb7aaa1e27e became leader
	I0408 12:52:47.096417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-488947_f0ff17a7-d3f7-445a-8c1f-feb7aaa1e27e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-488947 -n embed-certs-488947
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-488947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-87ddx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-488947 describe pod metrics-server-57f55c9bc5-87ddx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-488947 describe pod metrics-server-57f55c9bc5-87ddx: exit status 1 (67.607049ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-87ddx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-488947 describe pod metrics-server-57f55c9bc5-87ddx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0408 12:53:31.891760  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:53:44.542814  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:54:13.957581  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:54:45.113636  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:54:54.937387  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:55:29.654992  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
E0408 12:55:37.003619  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-08 13:02:24.610139581 +0000 UTC m=+6122.396821775
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-527454 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-527454 logs -n 25: (2.188229883s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo cat                                               |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo containerd config dump                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:42:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:42:32.207974  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:38.288048  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:41.359947  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:47.439972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:50.512009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:56.591982  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:59.664002  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:05.744032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:08.816017  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:14.895990  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:17.967942  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:24.048010  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:27.119964  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:33.200067  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:36.272037  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:42.351972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:45.424082  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:51.503992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:54.576088  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:00.656001  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:03.728079  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:09.807949  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:12.880051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:18.960024  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:22.032036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:28.112053  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:31.183992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:37.264032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:40.336026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:46.416019  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:49.487998  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:55.568026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:58.640044  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:04.719978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:07.792028  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:13.871997  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:16.944057  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:23.023969  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:26.096051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:32.176049  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:35.247929  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:41.328036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:44.399954  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:50.480046  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:53.552034  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:59.632009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:02.704063  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:08.784031  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:11.856098  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:17.936013  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:21.007970  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:27.087978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:30.159984  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:36.240042  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:39.245220  433557 start.go:364] duration metric: took 4m33.298555643s to acquireMachinesLock for "no-preload-135234"
	I0408 12:46:39.245298  433557 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:39.245311  433557 fix.go:54] fixHost starting: 
	I0408 12:46:39.245782  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:39.245821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:39.261035  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0408 12:46:39.261632  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:39.262208  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:46:39.262234  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:39.262592  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:39.262819  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:39.262938  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:46:39.264995  433557 fix.go:112] recreateIfNeeded on no-preload-135234: state=Stopped err=<nil>
	I0408 12:46:39.265029  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	W0408 12:46:39.265203  433557 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:39.266971  433557 out.go:177] * Restarting existing kvm2 VM for "no-preload-135234" ...
	I0408 12:46:39.268140  433557 main.go:141] libmachine: (no-preload-135234) Calling .Start
	I0408 12:46:39.268315  433557 main.go:141] libmachine: (no-preload-135234) Ensuring networks are active...
	I0408 12:46:39.269323  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network default is active
	I0408 12:46:39.269669  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network mk-no-preload-135234 is active
	I0408 12:46:39.270047  433557 main.go:141] libmachine: (no-preload-135234) Getting domain xml...
	I0408 12:46:39.270763  433557 main.go:141] libmachine: (no-preload-135234) Creating domain...
	I0408 12:46:40.496145  433557 main.go:141] libmachine: (no-preload-135234) Waiting to get IP...
	I0408 12:46:40.497357  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.497870  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.497950  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.497853  434768 retry.go:31] will retry after 305.764185ms: waiting for machine to come up
	I0408 12:46:40.805894  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.806351  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.806380  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.806304  434768 retry.go:31] will retry after 359.02584ms: waiting for machine to come up
	I0408 12:46:39.242442  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:39.242498  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.242871  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:46:39.242935  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.243206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:46:39.245063  433439 machine.go:97] duration metric: took 4m37.367683512s to provisionDockerMachine
	I0408 12:46:39.245112  433439 fix.go:56] duration metric: took 4m37.391017413s for fixHost
	I0408 12:46:39.245118  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 4m37.391040241s
	W0408 12:46:39.245140  433439 start.go:713] error starting host: provision: host is not running
	W0408 12:46:39.245388  433439 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0408 12:46:39.245401  433439 start.go:728] Will try again in 5 seconds ...
	I0408 12:46:41.167272  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.167748  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.167779  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.167702  434768 retry.go:31] will retry after 412.762727ms: waiting for machine to come up
	I0408 12:46:41.582454  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.582959  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.582990  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.582904  434768 retry.go:31] will retry after 572.486121ms: waiting for machine to come up
	I0408 12:46:42.156830  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.157270  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.157294  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.157243  434768 retry.go:31] will retry after 706.130574ms: waiting for machine to come up
	I0408 12:46:42.865325  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.865829  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.865863  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.865762  434768 retry.go:31] will retry after 901.114252ms: waiting for machine to come up
	I0408 12:46:43.768578  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:43.769067  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:43.769103  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:43.769032  434768 retry.go:31] will retry after 1.160836088s: waiting for machine to come up
	I0408 12:46:44.931002  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:44.931408  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:44.931438  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:44.931349  434768 retry.go:31] will retry after 998.940623ms: waiting for machine to come up
	I0408 12:46:44.247774  433439 start.go:360] acquireMachinesLock for default-k8s-diff-port-527454: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:46:45.931728  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:45.932157  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:45.932241  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:45.932115  434768 retry.go:31] will retry after 1.43975568s: waiting for machine to come up
	I0408 12:46:47.373294  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:47.373786  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:47.373821  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:47.373733  434768 retry.go:31] will retry after 1.828434336s: waiting for machine to come up
	I0408 12:46:49.205019  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:49.205414  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:49.205462  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:49.205376  434768 retry.go:31] will retry after 2.847051956s: waiting for machine to come up
	I0408 12:46:52.055004  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:52.055561  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:52.055586  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:52.055517  434768 retry.go:31] will retry after 2.941262871s: waiting for machine to come up
	I0408 12:46:54.998158  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:54.998598  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:54.998631  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:54.998542  434768 retry.go:31] will retry after 3.082026915s: waiting for machine to come up
	I0408 12:46:59.561049  433674 start.go:364] duration metric: took 4m43.922045129s to acquireMachinesLock for "embed-certs-488947"
	I0408 12:46:59.561130  433674 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:59.561140  433674 fix.go:54] fixHost starting: 
	I0408 12:46:59.561636  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:59.561683  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:59.578117  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0408 12:46:59.578573  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:59.579047  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:46:59.579074  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:59.579432  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:59.579633  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:46:59.579852  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:46:59.581445  433674 fix.go:112] recreateIfNeeded on embed-certs-488947: state=Stopped err=<nil>
	I0408 12:46:59.581492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	W0408 12:46:59.581667  433674 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:59.584306  433674 out.go:177] * Restarting existing kvm2 VM for "embed-certs-488947" ...
	I0408 12:46:59.585750  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Start
	I0408 12:46:59.585971  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring networks are active...
	I0408 12:46:59.586749  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network default is active
	I0408 12:46:59.587136  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network mk-embed-certs-488947 is active
	I0408 12:46:59.587551  433674 main.go:141] libmachine: (embed-certs-488947) Getting domain xml...
	I0408 12:46:59.588302  433674 main.go:141] libmachine: (embed-certs-488947) Creating domain...
	I0408 12:46:58.084025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084608  433557 main.go:141] libmachine: (no-preload-135234) Found IP for machine: 192.168.61.48
	I0408 12:46:58.084660  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has current primary IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084668  433557 main.go:141] libmachine: (no-preload-135234) Reserving static IP address...
	I0408 12:46:58.085160  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.085198  433557 main.go:141] libmachine: (no-preload-135234) Reserved static IP address: 192.168.61.48
	I0408 12:46:58.085213  433557 main.go:141] libmachine: (no-preload-135234) DBG | skip adding static IP to network mk-no-preload-135234 - found existing host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"}
	I0408 12:46:58.085229  433557 main.go:141] libmachine: (no-preload-135234) DBG | Getting to WaitForSSH function...
	I0408 12:46:58.085240  433557 main.go:141] libmachine: (no-preload-135234) Waiting for SSH to be available...
	I0408 12:46:58.087595  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.087990  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.088025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.088155  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH client type: external
	I0408 12:46:58.088178  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa (-rw-------)
	I0408 12:46:58.088210  433557 main.go:141] libmachine: (no-preload-135234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:46:58.088228  433557 main.go:141] libmachine: (no-preload-135234) DBG | About to run SSH command:
	I0408 12:46:58.088241  433557 main.go:141] libmachine: (no-preload-135234) DBG | exit 0
	I0408 12:46:58.220043  433557 main.go:141] libmachine: (no-preload-135234) DBG | SSH cmd err, output: <nil>: 
	I0408 12:46:58.220440  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetConfigRaw
	I0408 12:46:58.221216  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.223881  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224184  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.224202  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224597  433557 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:46:58.224804  433557 machine.go:94] provisionDockerMachine start ...
	I0408 12:46:58.224828  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:58.225070  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.227668  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228048  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.228080  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228242  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.228438  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228647  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228780  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.228941  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.229238  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.229253  433557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:46:58.344562  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:46:58.344602  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.344888  433557 buildroot.go:166] provisioning hostname "no-preload-135234"
	I0408 12:46:58.344922  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.345147  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.347895  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348278  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.348311  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348433  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.348638  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348801  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348911  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.349077  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.349289  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.349303  433557 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-135234 && echo "no-preload-135234" | sudo tee /etc/hostname
	I0408 12:46:58.478959  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-135234
	
	I0408 12:46:58.478996  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.481692  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482164  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.482187  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482410  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.482643  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.482851  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.483032  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.483230  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.483446  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.483465  433557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-135234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-135234/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-135234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:46:58.606022  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:58.606059  433557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:46:58.606080  433557 buildroot.go:174] setting up certificates
	I0408 12:46:58.606092  433557 provision.go:84] configureAuth start
	I0408 12:46:58.606108  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.606465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.609605  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610046  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.610079  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.612452  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612756  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.612784  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612905  433557 provision.go:143] copyHostCerts
	I0408 12:46:58.612974  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:46:58.613029  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:46:58.613097  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:46:58.613200  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:46:58.613209  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:46:58.613232  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:46:58.613295  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:46:58.613302  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:46:58.613323  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:46:58.613438  433557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.no-preload-135234 san=[127.0.0.1 192.168.61.48 localhost minikube no-preload-135234]
	I0408 12:46:58.832264  433557 provision.go:177] copyRemoteCerts
	I0408 12:46:58.832335  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:46:58.832382  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.835259  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835609  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.835650  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835883  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.836158  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.836332  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.836468  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:58.922968  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:46:58.949601  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 12:46:58.976832  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:46:59.004643  433557 provision.go:87] duration metric: took 398.533019ms to configureAuth
	I0408 12:46:59.004683  433557 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:46:59.004885  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:46:59.004988  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.008264  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008735  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.008783  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008987  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.009238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009416  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009542  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.009680  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.009866  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.009884  433557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:46:59.299880  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:46:59.299912  433557 machine.go:97] duration metric: took 1.075094362s to provisionDockerMachine
	I0408 12:46:59.299925  433557 start.go:293] postStartSetup for "no-preload-135234" (driver="kvm2")
	I0408 12:46:59.299940  433557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:46:59.299981  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.300373  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:46:59.300406  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.303274  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303769  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.303806  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303941  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.304222  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.304575  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.304874  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.395808  433557 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:46:59.400795  433557 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:46:59.400831  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:46:59.400914  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:46:59.401021  433557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:46:59.401162  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:46:59.411883  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:46:59.438486  433557 start.go:296] duration metric: took 138.54299ms for postStartSetup
	I0408 12:46:59.438546  433557 fix.go:56] duration metric: took 20.19323532s for fixHost
	I0408 12:46:59.438577  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.441875  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442334  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.442366  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442528  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.442753  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.442969  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.443101  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.443232  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.443414  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.443424  433557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:46:59.560853  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580419.531854515
	
	I0408 12:46:59.560881  433557 fix.go:216] guest clock: 1712580419.531854515
	I0408 12:46:59.560891  433557 fix.go:229] Guest: 2024-04-08 12:46:59.531854515 +0000 UTC Remote: 2024-04-08 12:46:59.438552641 +0000 UTC m=+293.653384531 (delta=93.301874ms)
	I0408 12:46:59.560918  433557 fix.go:200] guest clock delta is within tolerance: 93.301874ms
	I0408 12:46:59.560929  433557 start.go:83] releasing machines lock for "no-preload-135234", held for 20.315655744s
	I0408 12:46:59.560965  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.561244  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:59.564248  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564623  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.564658  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564758  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565245  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565434  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565524  433557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:46:59.565571  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.565726  433557 ssh_runner.go:195] Run: cat /version.json
	I0408 12:46:59.565752  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.568339  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568729  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568766  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.568789  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568931  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569139  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569201  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.569227  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.569300  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569392  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569486  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569647  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.569782  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569900  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.689264  433557 ssh_runner.go:195] Run: systemctl --version
	I0408 12:46:59.695704  433557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:46:59.848323  433557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:46:59.856068  433557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:46:59.856171  433557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:46:59.877460  433557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:46:59.877490  433557 start.go:494] detecting cgroup driver to use...
	I0408 12:46:59.877557  433557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:46:59.895329  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:46:59.910849  433557 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:46:59.910908  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:46:59.925541  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:46:59.941511  433557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:00.064454  433557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:00.218535  433557 docker.go:233] disabling docker service ...
	I0408 12:47:00.218614  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:00.234510  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:00.249703  433557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:00.403556  433557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:00.569324  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:00.585058  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:00.607536  433557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:00.607592  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.624701  433557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:00.624774  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.637414  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.649846  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.662725  433557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:00.675738  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.688667  433557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.710326  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.722619  433557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:00.734130  433557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:00.734227  433557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:00.749998  433557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:00.761556  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:00.881544  433557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:01.036952  433557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:01.037040  433557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:01.042260  433557 start.go:562] Will wait 60s for crictl version
	I0408 12:47:01.042329  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.046327  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:01.092359  433557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:01.092465  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.127373  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.165027  433557 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0408 12:47:00.888196  433674 main.go:141] libmachine: (embed-certs-488947) Waiting to get IP...
	I0408 12:47:00.889196  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:00.889766  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:00.889808  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:00.889702  434916 retry.go:31] will retry after 239.282192ms: waiting for machine to come up
	I0408 12:47:01.130508  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.131075  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.131111  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.131016  434916 retry.go:31] will retry after 388.837258ms: waiting for machine to come up
	I0408 12:47:01.522006  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.522413  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.522444  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.522364  434916 retry.go:31] will retry after 372.310428ms: waiting for machine to come up
	I0408 12:47:01.896325  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.896919  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.896954  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.896851  434916 retry.go:31] will retry after 574.930775ms: waiting for machine to come up
	I0408 12:47:02.474045  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.474626  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.474664  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.474557  434916 retry.go:31] will retry after 506.414729ms: waiting for machine to come up
	I0408 12:47:02.982589  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.983203  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.983238  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.983135  434916 retry.go:31] will retry after 614.351996ms: waiting for machine to come up
	I0408 12:47:03.599165  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:03.599682  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:03.599724  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:03.599640  434916 retry.go:31] will retry after 1.130025801s: waiting for machine to come up
	I0408 12:47:04.731350  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:04.731841  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:04.731874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:04.731791  434916 retry.go:31] will retry after 1.346613974s: waiting for machine to come up
	I0408 12:47:01.166849  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:47:01.169772  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170183  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:01.170211  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170523  433557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:01.175336  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:01.193759  433557 kubeadm.go:877] updating cluster {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:01.193949  433557 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 12:47:01.194017  433557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:01.234439  433557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0408 12:47:01.234466  433557 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:01.234547  433557 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.234575  433557 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.234589  433557 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.234625  433557 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 12:47:01.234576  433557 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.234562  433557 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.234696  433557 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.234554  433557 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.236654  433557 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.236678  433557 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.236701  433557 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 12:47:01.236686  433557 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.236630  433557 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236789  433557 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.475737  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.476344  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.482596  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.486680  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.490012  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.496685  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0408 12:47:01.510269  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.597119  433557 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0408 12:47:01.597179  433557 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.597238  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696018  433557 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0408 12:47:01.696123  433557 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.696148  433557 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0408 12:47:01.696196  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696201  433557 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.696237  433557 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0408 12:47:01.696254  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696265  433557 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.696299  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.710260  433557 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0408 12:47:01.710317  433557 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.710369  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799524  433557 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0408 12:47:01.799583  433557 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.799592  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.799616  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.799626  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.799618  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799679  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.799734  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.916654  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 12:47:01.916701  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.916783  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:01.916809  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.923863  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.923904  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.923974  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.924021  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.924065  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924176  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924067  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.926651  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0408 12:47:01.926681  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926722  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926783  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0408 12:47:01.974801  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0408 12:47:01.974875  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974939  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:01.974969  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974944  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0408 12:47:02.062944  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.916991  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.990237597s)
	I0408 12:47:04.917016  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.942055075s)
	I0408 12:47:04.917036  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0408 12:47:04.917040  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0408 12:47:04.917047  433557 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917098  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917117  433557 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.854126587s)
	I0408 12:47:04.917187  433557 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0408 12:47:04.917233  433557 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.917278  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:06.080429  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:06.080910  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:06.080942  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:06.080866  434916 retry.go:31] will retry after 1.125692215s: waiting for machine to come up
	I0408 12:47:07.208553  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:07.209015  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:07.209040  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:07.208961  434916 retry.go:31] will retry after 1.958080491s: waiting for machine to come up
	I0408 12:47:09.169878  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:09.170289  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:09.170319  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:09.170243  434916 retry.go:31] will retry after 2.241966019s: waiting for machine to come up
	I0408 12:47:08.833969  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.916836964s)
	I0408 12:47:08.834011  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0408 12:47:08.834029  433557 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834032  433557 ssh_runner.go:235] Completed: which crictl: (3.916731005s)
	I0408 12:47:08.834085  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834101  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:11.414435  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:11.414829  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:11.414851  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:11.414786  434916 retry.go:31] will retry after 2.815941766s: waiting for machine to come up
	I0408 12:47:14.233868  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:14.234272  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:14.234318  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:14.234228  434916 retry.go:31] will retry after 3.213192238s: waiting for machine to come up
	I0408 12:47:10.925471  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091353526s)
	I0408 12:47:10.925519  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0408 12:47:10.925542  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925581  433557 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.091434251s)
	I0408 12:47:10.925612  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925673  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 12:47:10.925782  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:12.405175  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.479529413s)
	I0408 12:47:12.405221  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0408 12:47:12.405238  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:12.405236  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.479424271s)
	I0408 12:47:12.405270  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0408 12:47:12.405296  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:14.283021  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (1.877693108s)
	I0408 12:47:14.283061  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0408 12:47:14.283079  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:14.283143  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:17.451190  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451657  433674 main.go:141] libmachine: (embed-certs-488947) Found IP for machine: 192.168.72.159
	I0408 12:47:17.451705  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has current primary IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451725  433674 main.go:141] libmachine: (embed-certs-488947) Reserving static IP address...
	I0408 12:47:17.452192  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.452239  433674 main.go:141] libmachine: (embed-certs-488947) Reserved static IP address: 192.168.72.159
	I0408 12:47:17.452259  433674 main.go:141] libmachine: (embed-certs-488947) DBG | skip adding static IP to network mk-embed-certs-488947 - found existing host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"}
	I0408 12:47:17.452282  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Getting to WaitForSSH function...
	I0408 12:47:17.452297  433674 main.go:141] libmachine: (embed-certs-488947) Waiting for SSH to be available...
	I0408 12:47:17.454780  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455169  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.455208  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH client type: external
	I0408 12:47:17.455354  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa (-rw-------)
	I0408 12:47:17.455384  433674 main.go:141] libmachine: (embed-certs-488947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:17.455401  433674 main.go:141] libmachine: (embed-certs-488947) DBG | About to run SSH command:
	I0408 12:47:17.455414  433674 main.go:141] libmachine: (embed-certs-488947) DBG | exit 0
	I0408 12:47:17.585037  433674 main.go:141] libmachine: (embed-certs-488947) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:17.585443  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetConfigRaw
	I0408 12:47:17.586184  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.589492  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.589953  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.589985  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.590269  433674 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:47:17.590518  433674 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:17.590550  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:17.590798  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.593968  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594570  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.594615  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594832  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.595073  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595236  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595442  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.595661  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.595892  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.595905  433674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:17.708468  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:17.708504  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.708857  433674 buildroot.go:166] provisioning hostname "embed-certs-488947"
	I0408 12:47:17.708890  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.709083  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.712242  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712698  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.712732  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712928  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.713122  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713298  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713433  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.713612  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.713801  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.713817  433674 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
	I0408 12:47:17.842964  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488947
	
	I0408 12:47:17.843017  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.846436  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.846959  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.846992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.847225  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.847486  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847726  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847945  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.848182  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.848373  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.848397  433674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488947/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:17.975087  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
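The shell run just above is how the provisioner keeps the node name resolvable inside the guest: it rewrites the 127.0.1.1 entry if one exists, otherwise appends one. A hypothetical spot-check from the host, assuming the profile name from this run and that the VM is still up:

    # hypothetical spot-check; profile name taken from this log
    minikube -p embed-certs-488947 ssh -- 'hostname && grep embed-certs-488947 /etc/hosts'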
	I0408 12:47:17.975123  433674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:17.975178  433674 buildroot.go:174] setting up certificates
	I0408 12:47:17.975198  433674 provision.go:84] configureAuth start
	I0408 12:47:17.975212  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.975606  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.979028  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979483  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.979510  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979754  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.982474  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.982944  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.982977  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.983174  433674 provision.go:143] copyHostCerts
	I0408 12:47:17.983230  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:17.983240  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:17.983291  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:17.983408  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:17.983419  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:17.983444  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:17.983500  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:17.983507  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:17.983526  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:17.983580  433674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488947 san=[127.0.0.1 192.168.72.159 embed-certs-488947 localhost minikube]
	I0408 12:47:18.043022  433674 provision.go:177] copyRemoteCerts
	I0408 12:47:18.043092  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:18.043162  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.046335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046722  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.046757  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046904  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.047145  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.047333  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.047475  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.134761  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:18.163745  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 12:47:18.192946  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:18.220790  433674 provision.go:87] duration metric: took 245.573885ms to configureAuth
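configureAuth generated a server certificate with the SANs listed above (127.0.0.1, the VM IP, the profile name, localhost, minikube) and copied it to /etc/docker on the guest. An illustrative way to confirm those SANs on the host-side copy, assuming openssl is available:

    # illustrative; path taken from this log
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'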
	I0408 12:47:18.220827  433674 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:18.221067  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:47:18.221175  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.224177  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.224805  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.224839  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.225098  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.225363  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225569  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225797  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.226024  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.226202  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.226219  433674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:18.522682  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:18.522718  433674 machine.go:97] duration metric: took 932.18024ms to provisionDockerMachine
	I0408 12:47:18.522735  433674 start.go:293] postStartSetup for "embed-certs-488947" (driver="kvm2")
	I0408 12:47:18.522750  433674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:18.522776  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.523133  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:18.523174  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.526523  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.526872  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.526903  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.527101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.527336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.527512  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.527692  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.615353  433674 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:18.620414  433674 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:18.620447  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:18.620525  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:18.620627  433674 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:18.620726  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:18.630585  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:18.658952  433674 start.go:296] duration metric: took 136.200863ms for postStartSetup
	I0408 12:47:18.659004  433674 fix.go:56] duration metric: took 19.097863992s for fixHost
	I0408 12:47:18.659037  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.662115  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662571  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.662606  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662843  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.663100  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663308  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663480  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.663676  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.663919  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.663939  433674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:18.781355  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580438.730334929
	
	I0408 12:47:18.781402  433674 fix.go:216] guest clock: 1712580438.730334929
	I0408 12:47:18.781427  433674 fix.go:229] Guest: 2024-04-08 12:47:18.730334929 +0000 UTC Remote: 2024-04-08 12:47:18.659010209 +0000 UTC m=+303.178294166 (delta=71.32472ms)
	I0408 12:47:18.781457  433674 fix.go:200] guest clock delta is within tolerance: 71.32472ms
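The clock check runs what is almost certainly date +%s.%N in the guest (the %!s(MISSING)/%!N(MISSING) markers are just the log formatter tripping over the literal % signs) and compares it to the host-side timestamp of the same moment; the 71 ms delta seen here is accepted. A sketch of that comparison with the values from this run; the 2 s threshold is an assumption, since the real tolerance is not printed in the log:

    # values copied from the log above; the 2 s tolerance is an assumption
    guest=1712580438.730334929
    host=1712580438.659010209
    awk -v g="$guest" -v h="$host" 'BEGIN {
      d = g - h; if (d < 0) d = -d
      if (d < 2.0) print "within tolerance: " d " s"
      else         print "clock skew too large: " d " s"
    }'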
	I0408 12:47:18.781465  433674 start.go:83] releasing machines lock for "embed-certs-488947", held for 19.22036189s
	I0408 12:47:18.781502  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.781800  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:18.784825  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785270  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.785313  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786104  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786456  433674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:18.786501  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.786626  433674 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:18.786660  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.789409  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789704  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790019  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790149  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790306  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790322  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790338  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790495  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790528  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790745  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.790867  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790997  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.911025  433674 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:18.917785  433674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:19.070383  433674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:19.077521  433674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:19.077606  433674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:19.094598  433674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
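The find/mv step above disables the stock bridge and podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so the originals stay available. An illustrative check inside the guest:

    # file name taken from this log; the original config is kept, only renamed
    ls -l /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled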
	I0408 12:47:19.094636  433674 start.go:494] detecting cgroup driver to use...
	I0408 12:47:19.094750  433674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:19.111163  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:19.125621  433674 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:19.125688  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:19.141948  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:19.156671  433674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:19.281688  433674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:19.455445  433674 docker.go:233] disabling docker service ...
	I0408 12:47:19.455519  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:19.474594  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:19.491301  433674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:19.646063  433674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:19.786075  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:19.803535  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:19.829204  433674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:19.829282  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.842132  433674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:19.842201  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.853915  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.866449  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.879235  433674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:19.899411  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.920363  433674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.946414  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.958824  433674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:19.969691  433674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:19.969754  433674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:19.986458  433674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:19.998655  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:20.157494  433674 ssh_runner.go:195] Run: sudo systemctl restart crio
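After the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should roughly contain the values minikube just set. An approximate sketch of only the keys touched in this log, with section names following the usual CRI-O TOML layout (the rest of the file is omitted):

    # approximate, illustrative content only
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]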
	I0408 12:47:20.318209  433674 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:20.318287  433674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:20.325414  433674 start.go:562] Will wait 60s for crictl version
	I0408 12:47:20.325490  433674 ssh_runner.go:195] Run: which crictl
	I0408 12:47:20.330070  433674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:20.383808  433674 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:20.383959  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.417705  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.454321  433674 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:47:20.456101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:20.460035  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.460734  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:20.460774  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.461140  433674 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:20.467650  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:20.486936  433674 kubeadm.go:877] updating cluster {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:20.487105  433674 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:47:20.487176  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:20.529152  433674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:47:20.529293  433674 ssh_runner.go:195] Run: which lz4
	I0408 12:47:16.552712  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.26954566s)
	I0408 12:47:16.552781  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0408 12:47:16.552797  433557 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:16.552839  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:17.512103  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 12:47:17.512151  433557 cache_images.go:123] Successfully loaded all cached images
	I0408 12:47:17.512158  433557 cache_images.go:92] duration metric: took 16.277680364s to LoadCachedImages
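Because this profile runs without a preload (no-preload-135234), every required image is shipped as a tarball and loaded through podman on the guest, as in the kube-controller-manager and storage-provisioner loads above. Repeating one load by hand would look roughly like:

    # illustrative; image path taken from this log
    sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
    sudo podman images | grep storage-provisioner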
	I0408 12:47:17.512171  433557 kubeadm.go:928] updating node { 192.168.61.48 8443 v1.30.0-rc.0 crio true true} ...
	I0408 12:47:17.512324  433557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-135234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
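The ExecStart override above is written a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the 321-byte scp at 12:47:17.587 below). Inspecting the merged unit on the guest would look like:

    # illustrative; systemd folds the drop-in into the base kubelet.service
    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager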
	I0408 12:47:17.512440  433557 ssh_runner.go:195] Run: crio config
	I0408 12:47:17.561382  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:17.561424  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:17.561441  433557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:17.561472  433557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-135234 NodeName:no-preload-135234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:17.561681  433557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-135234"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:17.561807  433557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0408 12:47:17.574237  433557 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:17.574321  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:17.587129  433557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0408 12:47:17.609022  433557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0408 12:47:17.629656  433557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
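The rendered kubeadm config above is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new (2163 bytes) and only promoted to kubeadm.yaml later, after the stale-config checks. If this kubeadm build supports it, the staged file can be sanity-checked in place:

    # illustrative; 'kubeadm config validate' exists in recent kubeadm releases,
    # but its availability in this exact build is an assumption
    sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new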
	I0408 12:47:17.650373  433557 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:17.655031  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:17.670872  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:17.811548  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:17.830945  433557 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234 for IP: 192.168.61.48
	I0408 12:47:17.830974  433557 certs.go:194] generating shared ca certs ...
	I0408 12:47:17.831000  433557 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:17.831219  433557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:17.831277  433557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:17.831290  433557 certs.go:256] generating profile certs ...
	I0408 12:47:17.831453  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/client.key
	I0408 12:47:17.831521  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key.dbd08c09
	I0408 12:47:17.831577  433557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key
	I0408 12:47:17.831823  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:17.831891  433557 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:17.831906  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:17.831946  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:17.831978  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:17.832007  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:17.832059  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:17.832899  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:17.869894  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:17.902893  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:17.943547  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:17.990462  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:47:18.026697  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:18.055643  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:18.083357  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:47:18.109247  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:18.134513  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:18.161811  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:18.189968  433557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:18.210173  433557 ssh_runner.go:195] Run: openssl version
	I0408 12:47:18.216813  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:18.230693  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236461  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236526  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.244183  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:18.257589  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:18.271235  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277004  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277088  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.283549  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:18.296789  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:18.309587  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314537  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314608  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.320942  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
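The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: they are the OpenSSL subject-hash of each certificate, which is how OpenSSL locates a CA under /etc/ssl/certs at verification time. Reproducing one of them:

    # illustrative; prints b5213941, matching the /etc/ssl/certs/b5213941.0 link above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem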
	I0408 12:47:18.333407  433557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:18.338637  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:18.345365  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:18.352262  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:18.359464  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:18.366233  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:18.373280  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
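Each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; the checks here all passed silently. The same check by hand:

    # illustrative; exit status 0 means the cert is good for at least another day
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "ok for 24h" || echo "expires within 24h"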
	I0408 12:47:18.380134  433557 kubeadm.go:391] StartCluster: {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:18.380291  433557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:18.380403  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.423068  433557 cri.go:89] found id: ""
	I0408 12:47:18.423164  433557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:18.435458  433557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:18.435497  433557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:18.435503  433557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:18.435562  433557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:18.447509  433557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:18.448720  433557 kubeconfig.go:125] found "no-preload-135234" server: "https://192.168.61.48:8443"
	I0408 12:47:18.451154  433557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:18.463246  433557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.48
	I0408 12:47:18.463299  433557 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:18.463315  433557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:18.463394  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.522929  433557 cri.go:89] found id: ""
	I0408 12:47:18.523011  433557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:18.546346  433557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:18.558613  433557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:18.558640  433557 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:18.558714  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:18.570020  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:18.570106  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:18.581323  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:18.593718  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:18.593778  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:18.606889  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.619251  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:18.619320  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.632343  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:18.644913  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:18.645004  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:18.656965  433557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:18.670774  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:18.785507  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:19.988135  433557 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202584017s)
	I0408 12:47:19.988174  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.235430  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.316709  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.456307  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:20.456393  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
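On this restart path minikube re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init, then polls for the apiserver process as shown above. Probing the same control plane by hand might look like the following; the /healthz probe is an assumption and is not part of this log:

    # process check copied from the log; the health probe is illustrative only
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    curl -k https://192.168.61.48:8443/healthz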
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
	I0408 12:47:20.534154  433674 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:20.538991  433674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:20.539034  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:47:22.249270  433674 crio.go:462] duration metric: took 1.715182486s to copy over tarball
	I0408 12:47:22.249391  433674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:24.966695  433674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.717265287s)
	I0408 12:47:24.966730  433674 crio.go:469] duration metric: took 2.717416948s to extract the tarball
	I0408 12:47:24.966740  433674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:25.007656  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:25.063445  433674 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:47:25.063482  433674 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:47:25.063494  433674 kubeadm.go:928] updating node { 192.168.72.159 8443 v1.29.3 crio true true} ...
	I0408 12:47:25.063627  433674 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-488947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:25.063745  433674 ssh_runner.go:195] Run: crio config
	I0408 12:47:25.122219  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:25.122282  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:25.122298  433674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:25.122330  433674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488947 NodeName:embed-certs-488947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:25.122556  433674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-488947"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:25.122633  433674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:47:25.137001  433674 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:25.137148  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:25.151168  433674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0408 12:47:25.171698  433674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:25.195101  433674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0408 12:47:25.216873  433674 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:25.221155  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:25.235740  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:25.354135  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:25.377763  433674 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947 for IP: 192.168.72.159
	I0408 12:47:25.377801  433674 certs.go:194] generating shared ca certs ...
	I0408 12:47:25.377824  433674 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:25.378055  433674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:25.378137  433674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:25.378161  433674 certs.go:256] generating profile certs ...
	I0408 12:47:25.378299  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/client.key
	I0408 12:47:25.378391  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key.21d2a89c
	I0408 12:47:25.378460  433674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key
	I0408 12:47:25.378628  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:25.378687  433674 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:25.378702  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:25.378736  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:25.378780  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:25.378818  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:25.378888  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:25.379800  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:25.422370  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:25.468967  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:25.516750  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:20.956916  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.456948  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.957498  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.982763  433557 api_server.go:72] duration metric: took 1.526450888s to wait for apiserver process to appear ...
	I0408 12:47:21.982797  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:21.982852  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.363696  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.363732  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.363758  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.398003  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.398065  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.483280  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
	I0408 12:47:26.461241  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.461305  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.461321  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.482518  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.482566  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.483554  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.497035  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.497075  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.983270  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.996515  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.996556  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.483125  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.491506  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.491549  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.983839  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.991044  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.991090  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.483669  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.490665  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:28.490703  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.983248  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.998278  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:47:29.007388  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:47:29.007429  433557 api_server.go:131] duration metric: took 7.024624495s to wait for apiserver health ...
	I0408 12:47:29.007444  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:29.007452  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:29.009506  433557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:25.561601  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 12:47:26.087896  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:26.116559  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:26.145651  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:26.174910  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:26.206627  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:26.238398  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:26.281684  433674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:26.306417  433674 ssh_runner.go:195] Run: openssl version
	I0408 12:47:26.313279  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:26.328106  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333727  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333810  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.340200  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:26.352316  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:26.364788  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.369928  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.370003  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.376525  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:26.388232  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:26.400301  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405327  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405407  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.411586  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:26.423764  433674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:26.428995  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:26.435932  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:26.442742  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:26.451458  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:26.458715  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:26.466424  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:26.473948  433674 kubeadm.go:391] StartCluster: {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:26.474083  433674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:26.474158  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.515603  433674 cri.go:89] found id: ""
	I0408 12:47:26.515676  433674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:26.526818  433674 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:26.526845  433674 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:26.526851  433674 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:26.526908  433674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:26.537675  433674 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:26.538807  433674 kubeconfig.go:125] found "embed-certs-488947" server: "https://192.168.72.159:8443"
	I0408 12:47:26.540848  433674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:26.551278  433674 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.159
	I0408 12:47:26.551317  433674 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:26.551330  433674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:26.551406  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.591372  433674 cri.go:89] found id: ""
	I0408 12:47:26.591478  433674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:26.610486  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:26.621770  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:26.621794  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:26.621869  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:26.632480  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:26.632554  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:26.645878  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:26.659969  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:26.660068  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:26.670611  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.680945  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:26.681034  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.692201  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:26.703049  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:26.703126  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:26.715887  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:26.727464  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:26.956245  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.722655  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.973294  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.086774  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.203640  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:28.203755  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:28.704550  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.203852  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.704305  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.724333  433674 api_server.go:72] duration metric: took 1.520681062s to wait for apiserver process to appear ...
	I0408 12:47:29.724372  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:29.724402  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
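After kubelet-start, minikube polls the apiserver health endpoint (here https://192.168.72.159:8443/healthz) every 500ms until it answers 200, which is what produces the 403, 500, and finally 200 responses logged below. A rough way to reproduce the probe by hand from inside the VM; the ?verbose query string is standard apiserver behaviour and yields the same [+]/[-] check list, and an anonymous request may be rejected with 403 exactly as logged at 12:47:32:

    # Illustrative health probe; -k skips TLS verification for the self-signed cert.
    curl -ks "https://192.168.72.159:8443/healthz?verbose"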
	I0408 12:47:29.010843  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:29.029631  433557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:29.052609  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:29.069954  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:29.070010  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:29.070022  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:29.070034  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:29.070043  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:29.070049  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:47:29.070076  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:29.070087  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:29.070098  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:47:29.070107  433557 system_pods.go:74] duration metric: took 17.469317ms to wait for pod list to return data ...
	I0408 12:47:29.070117  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:29.075401  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:29.075443  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:29.075459  433557 node_conditions.go:105] duration metric: took 5.335891ms to run NodePressure ...
	I0408 12:47:29.075489  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:29.403218  433557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409235  433557 kubeadm.go:733] kubelet initialised
	I0408 12:47:29.409263  433557 kubeadm.go:734] duration metric: took 6.014758ms waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409276  433557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:29.418787  433557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.441264  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441310  433557 pod_ready.go:81] duration metric: took 22.478832ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.441325  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441336  433557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.461805  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461916  433557 pod_ready.go:81] duration metric: took 20.564997ms for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.461945  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461982  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.475160  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475198  433557 pod_ready.go:81] duration metric: took 13.191566ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.475229  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475241  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.486266  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486306  433557 pod_ready.go:81] duration metric: took 11.046794ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.486321  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486331  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.857658  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857703  433557 pod_ready.go:81] duration metric: took 371.357848ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.857717  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857725  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.258154  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258194  433557 pod_ready.go:81] duration metric: took 400.459219ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.258208  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258230  433557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.656845  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656890  433557 pod_ready.go:81] duration metric: took 398.64565ms for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.656904  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656915  433557 pod_ready.go:38] duration metric: took 1.247627349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
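Each system-critical pod above is skipped because the node no-preload-135234 itself still reports Ready=False; the same state can be inspected with kubectl. The context name is assumed to match the profile name, which is minikube's default:

    # Check node readiness and the kube-system pods the wait loop is watching.
    kubectl --context no-preload-135234 get nodes
    kubectl --context no-preload-135234 -n kube-system get pods -o wide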
	I0408 12:47:30.656947  433557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:47:30.683024  433557 ops.go:34] apiserver oom_adj: -16
	I0408 12:47:30.683055  433557 kubeadm.go:591] duration metric: took 12.247545723s to restartPrimaryControlPlane
	I0408 12:47:30.683067  433557 kubeadm.go:393] duration metric: took 12.302946s to StartCluster
	I0408 12:47:30.683095  433557 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.683214  433557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:30.685507  433557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.685852  433557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:47:30.687967  433557 out.go:177] * Verifying Kubernetes components...
	I0408 12:47:30.685951  433557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:47:30.686122  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:47:30.689462  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:30.689475  433557 addons.go:69] Setting storage-provisioner=true in profile "no-preload-135234"
	I0408 12:47:30.689511  433557 addons.go:234] Setting addon storage-provisioner=true in "no-preload-135234"
	W0408 12:47:30.689521  433557 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:47:30.689555  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.689573  433557 addons.go:69] Setting default-storageclass=true in profile "no-preload-135234"
	I0408 12:47:30.689620  433557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-135234"
	I0408 12:47:30.689956  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.689995  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.689996  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690026  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.690085  433557 addons.go:69] Setting metrics-server=true in profile "no-preload-135234"
	I0408 12:47:30.690135  433557 addons.go:234] Setting addon metrics-server=true in "no-preload-135234"
	W0408 12:47:30.690146  433557 addons.go:243] addon metrics-server should already be in state true
	I0408 12:47:30.690186  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.690614  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690692  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.710746  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0408 12:47:30.710947  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0408 12:47:30.711153  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0408 12:47:30.711301  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711752  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711839  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.712010  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712027  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712564  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.712757  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712780  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712911  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712926  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.713381  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.713427  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.713660  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714094  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714304  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.714365  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.714401  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.717892  433557 addons.go:234] Setting addon default-storageclass=true in "no-preload-135234"
	W0408 12:47:30.717959  433557 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:47:30.718004  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.718497  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.718577  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.734825  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0408 12:47:30.736890  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0408 12:47:30.756599  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.756681  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.757290  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757312  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757318  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757332  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757774  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.757849  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.758015  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.758082  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.760658  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.760732  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.762999  433557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:47:30.764689  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:47:30.764714  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:47:30.766392  433557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:30.764741  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.767890  433557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:30.767911  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:47:30.767933  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.772580  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.772714  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773015  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773038  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773423  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773449  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773462  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773663  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773875  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.773897  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.774038  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774074  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774163  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.774227  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.779694  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0408 12:47:30.780190  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.780772  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.780793  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.781114  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.781773  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.781821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.803661  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0408 12:47:30.804212  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.804828  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.804847  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.805397  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.805713  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.807761  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.808244  433557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:30.808269  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:47:30.808288  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.811598  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812078  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.812109  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812264  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.812465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.812702  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.812868  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:30.979880  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:31.004961  433557 node_ready.go:35] waiting up to 6m0s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:31.088114  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:31.110971  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:47:31.111017  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:47:31.150193  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:47:31.150229  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:47:31.184811  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.184899  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:47:31.214364  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.244802  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:32.406228  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.318067686s)
	I0408 12:47:32.406305  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406317  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.406830  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.406897  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.406913  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406921  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.407242  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407275  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407319  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.407329  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.532524  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318098791s)
	I0408 12:47:32.532662  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532694  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.532576  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287674494s)
	I0408 12:47:32.532774  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532799  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533022  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533041  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533052  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533060  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533223  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533280  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533286  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533294  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533301  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533457  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533516  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533539  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533546  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.534974  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.534991  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.535019  433557 addons.go:470] Verifying addon metrics-server=true in "no-preload-135234"
	I0408 12:47:32.543151  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.543183  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.543549  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.543571  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.546033  433557 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
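The addon enablement above reduces to scp'ing the manifests into /etc/kubernetes/addons/ and applying them with the bundled kubectl; the exact apply commands are in the log and could be re-run manually inside the VM:

    # Storage provisioner (as run at 12:47:31.088114 above).
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml
    # metrics-server manifests (as run at 12:47:31.214364 above).
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml

The enabled state can also be confirmed from the host with minikube addons list -p no-preload-135234.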
	I0408 12:47:32.894282  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:32.894320  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:32.894336  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:32.988397  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:32.988442  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.224790  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.232146  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.232176  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.724683  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.729479  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.729520  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:34.224919  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:34.230233  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:47:34.247835  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:47:34.247872  433674 api_server.go:131] duration metric: took 4.523492127s to wait for apiserver health ...
	I0408 12:47:34.247883  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:34.247890  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:34.249807  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:34.251603  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:34.265254  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
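With the apiserver healthy, minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist and only then waits for kube-system pods. The file contents are not logged, but they can be inspected on the node; the profile for this process is assumed to be embed-certs-488947 based on the pod names below:

    # Inspect the generated bridge CNI configuration (contents not shown in the log).
    minikube -p embed-certs-488947 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    minikube -p embed-certs-488947 ssh -- sudo ls -l /etc/cni/net.d/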
	I0408 12:47:34.288078  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:34.301533  433674 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:34.301570  433674 system_pods.go:61] "coredns-76f75df574-hq2mm" [cfc7bd40-0b7d-4e00-ac55-b3ae796018ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:34.301577  433674 system_pods.go:61] "etcd-embed-certs-488947" [eb29ace5-8ad9-4080-a875-2eb83dcea583] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:34.301585  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [8e97033f-996a-4b64-9474-7b4d562eb1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:34.301591  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [b3db7631-d953-418e-9c72-f299d0287a2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:34.301595  433674 system_pods.go:61] "kube-proxy-2gn8m" [c31d8f0d-d6c1-4afa-b64c-7fc422d493f2] Running
	I0408 12:47:34.301600  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b9b29f85-7a75-4b09-b6cd-940ff42326d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:34.301604  433674 system_pods.go:61] "metrics-server-57f55c9bc5-z2ztl" [d9dc47ad-3370-4e55-a724-8c529c723992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:34.301607  433674 system_pods.go:61] "storage-provisioner" [4953dc3a-31ca-464d-9530-34f488ed9a02] Running
	I0408 12:47:34.301617  433674 system_pods.go:74] duration metric: took 13.514139ms to wait for pod list to return data ...
	I0408 12:47:34.301624  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:34.305931  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:34.305962  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:34.305974  433674 node_conditions.go:105] duration metric: took 4.345624ms to run NodePressure ...
	I0408 12:47:34.305993  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:34.598392  433674 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603606  433674 kubeadm.go:733] kubelet initialised
	I0408 12:47:34.603632  433674 kubeadm.go:734] duration metric: took 5.204237ms waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603641  433674 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:34.610027  433674 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:32.547718  433557 addons.go:505] duration metric: took 1.861769291s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0408 12:47:33.008857  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:35.510251  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:39.502172  433439 start.go:364] duration metric: took 55.254308447s to acquireMachinesLock for "default-k8s-diff-port-527454"
	I0408 12:47:39.502232  433439 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:39.502245  433439 fix.go:54] fixHost starting: 
	I0408 12:47:39.502725  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:39.502767  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:39.523738  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0408 12:47:39.525022  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:39.525614  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:47:39.525646  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:39.526077  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:39.526307  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:47:39.526448  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:47:39.528207  433439 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527454: state=Stopped err=<nil>
	I0408 12:47:39.528241  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	W0408 12:47:39.528449  433439 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:39.530360  433439 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-527454" ...
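fixHost found the default-k8s-diff-port-527454 machine stopped and is restarting it through the kvm2 driver. With the kvm2/libvirt driver the underlying domain carries the profile name, so the same state can be cross-checked directly on the host; this assumes libvirt's default system connection:

    # Cross-check the VM state the driver reported (Stopped) before the restart.
    sudo virsh list --all | grep default-k8s-diff-port-527454
    sudo virsh dominfo default-k8s-diff-port-527454
    minikube status -p default-k8s-diff-port-527454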
	I0408 12:47:36.618430  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.619713  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.009213  433557 node_ready.go:49] node "no-preload-135234" has status "Ready":"True"
	I0408 12:47:38.009241  433557 node_ready.go:38] duration metric: took 7.004239102s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:38.009250  433557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:38.014665  433557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020024  433557 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:38.020054  433557 pod_ready.go:81] duration metric: took 5.358174ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020067  433557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:40.030803  433557 pod_ready.go:102] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:39.531815  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Start
	I0408 12:47:39.531988  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring networks are active...
	I0408 12:47:39.532969  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network default is active
	I0408 12:47:39.533486  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network mk-default-k8s-diff-port-527454 is active
	I0408 12:47:39.533947  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Getting domain xml...
	I0408 12:47:39.534767  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Creating domain...
	I0408 12:47:40.935150  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting to get IP...
	I0408 12:47:40.936250  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:40.936778  435248 retry.go:31] will retry after 215.442539ms: waiting for machine to come up
	I0408 12:47:41.154393  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154940  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.154852  435248 retry.go:31] will retry after 274.982374ms: waiting for machine to come up
	I0408 12:47:41.431442  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.431990  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.432023  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.431933  435248 retry.go:31] will retry after 335.077282ms: waiting for machine to come up
	I0408 12:47:40.620537  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:42.622241  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:44.118493  433674 pod_ready.go:92] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.118532  433674 pod_ready.go:81] duration metric: took 9.508474788s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.118545  433674 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626843  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.626869  433674 pod_ready.go:81] duration metric: took 508.318376ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626882  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633488  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.633521  433674 pod_ready.go:81] duration metric: took 6.630145ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633535  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027744  433557 pod_ready.go:92] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.027771  433557 pod_ready.go:81] duration metric: took 3.007695895s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027782  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034038  433557 pod_ready.go:92] pod "kube-apiserver-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.034076  433557 pod_ready.go:81] duration metric: took 6.28617ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034090  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039232  433557 pod_ready.go:92] pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.039262  433557 pod_ready.go:81] duration metric: took 5.161613ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039277  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045793  433557 pod_ready.go:92] pod "kube-proxy-tr6td" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.045887  433557 pod_ready.go:81] duration metric: took 6.600896ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045908  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.209976  433557 pod_ready.go:92] pod "kube-scheduler-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.210003  433557 pod_ready.go:81] duration metric: took 164.085848ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.210018  433557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:43.220338  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:45.718170  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:41.768734  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769194  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.769131  435248 retry.go:31] will retry after 581.590127ms: waiting for machine to come up
	I0408 12:47:42.352156  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.352975  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.353017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:42.352850  435248 retry.go:31] will retry after 673.545679ms: waiting for machine to come up
	I0408 12:47:43.028329  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029066  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029101  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.028956  435248 retry.go:31] will retry after 690.795418ms: waiting for machine to come up
	I0408 12:47:43.721435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.721999  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.722025  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.721948  435248 retry.go:31] will retry after 941.917321ms: waiting for machine to come up
	I0408 12:47:44.665002  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665468  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665495  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:44.665406  435248 retry.go:31] will retry after 1.037587737s: waiting for machine to come up
	I0408 12:47:45.705319  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705792  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705822  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:45.705730  435248 retry.go:31] will retry after 1.287151334s: waiting for machine to come up
	I0408 12:47:46.640995  433674 pod_ready.go:102] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:48.558627  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.558666  433674 pod_ready.go:81] duration metric: took 3.925119514s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.558683  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583378  433674 pod_ready.go:92] pod "kube-proxy-2gn8m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.583405  433674 pod_ready.go:81] duration metric: took 24.715384ms for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583416  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598937  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.598969  433674 pod_ready.go:81] duration metric: took 15.544342ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598983  433674 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:47.918307  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:50.219513  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:46.994162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994640  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994672  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:46.994593  435248 retry.go:31] will retry after 1.863771905s: waiting for machine to come up
	I0408 12:47:48.860673  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861257  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:48.861151  435248 retry.go:31] will retry after 2.204894376s: waiting for machine to come up
	I0408 12:47:51.067423  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067909  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067937  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:51.067864  435248 retry.go:31] will retry after 2.625423179s: waiting for machine to come up
	I0408 12:47:50.608007  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:53.108084  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:52.717545  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:55.218944  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.695295  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695826  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695862  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:53.695772  435248 retry.go:31] will retry after 4.111917473s: waiting for machine to come up
	I0408 12:47:55.606909  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:58.111708  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:57.717559  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:59.718066  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.809179  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809697  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809729  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:57.809632  435248 retry.go:31] will retry after 4.27502806s: waiting for machine to come up
	I0408 12:48:02.086033  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086558  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has current primary IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086586  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Found IP for machine: 192.168.50.7
	I0408 12:48:02.086603  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserving static IP address...
	I0408 12:48:02.087069  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.087105  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserved static IP address: 192.168.50.7
	I0408 12:48:02.087137  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | skip adding static IP to network mk-default-k8s-diff-port-527454 - found existing host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"}
	I0408 12:48:02.087158  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Getting to WaitForSSH function...
	I0408 12:48:02.087177  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for SSH to be available...
	I0408 12:48:02.089228  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089585  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.089608  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH client type: external
	I0408 12:48:02.089840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa (-rw-------)
	I0408 12:48:02.089885  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:48:02.089900  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | About to run SSH command:
	I0408 12:48:02.089917  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | exit 0
	I0408 12:48:02.216245  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | SSH cmd err, output: <nil>: 
	I0408 12:48:02.216684  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetConfigRaw
	I0408 12:48:02.217582  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.220543  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.220961  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.220995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.221282  433439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:48:02.221480  433439 machine.go:94] provisionDockerMachine start ...
	I0408 12:48:02.221499  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:02.221738  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.224371  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.224770  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.224802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.225007  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.225236  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225399  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225548  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.225740  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.225957  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.225970  433439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:48:02.336716  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:48:02.336754  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337074  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:48:02.337108  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337351  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.340133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340539  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.340583  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340653  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.340842  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341016  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341171  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.341346  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.341539  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.341556  433439 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-527454 && echo "default-k8s-diff-port-527454" | sudo tee /etc/hostname
	I0408 12:48:02.464462  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-527454
	
	I0408 12:48:02.464507  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.467682  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468082  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.468118  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468335  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.468595  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468782  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468954  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.469154  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.469372  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.469392  433439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-527454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-527454/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-527454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:48:02.593971  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:48:02.594006  433439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:48:02.594061  433439 buildroot.go:174] setting up certificates
	I0408 12:48:02.594078  433439 provision.go:84] configureAuth start
	I0408 12:48:02.594092  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.594431  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.597587  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.598043  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.600898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601267  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.601299  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601497  433439 provision.go:143] copyHostCerts
	I0408 12:48:02.601562  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:48:02.601588  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:48:02.601653  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:48:02.601841  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:48:02.601857  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:48:02.601888  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:48:02.601966  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:48:02.601981  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:48:02.602010  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:48:02.602088  433439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-527454 san=[127.0.0.1 192.168.50.7 default-k8s-diff-port-527454 localhost minikube]
	I0408 12:48:02.845116  433439 provision.go:177] copyRemoteCerts
	I0408 12:48:02.845190  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:48:02.845217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.848054  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.848406  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848559  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.848817  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.848986  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.849125  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:02.934223  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:48:02.962726  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0408 12:48:02.992767  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:48:03.021973  433439 provision.go:87] duration metric: took 427.87874ms to configureAuth
	I0408 12:48:03.022009  433439 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:48:03.022270  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:48:03.022382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.025407  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025765  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.025802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025959  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.026215  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026379  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026510  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.026659  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.026834  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.026856  433439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:48:03.310263  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:48:03.310307  433439 machine.go:97] duration metric: took 1.088813603s to provisionDockerMachine
	I0408 12:48:03.310323  433439 start.go:293] postStartSetup for "default-k8s-diff-port-527454" (driver="kvm2")
	I0408 12:48:03.310337  433439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:48:03.310362  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.310758  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:48:03.310799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.313533  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.313968  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.314001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.314201  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.314375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.314584  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.314760  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.400087  433439 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:48:03.405240  433439 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:48:03.405272  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:48:03.405351  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:48:03.405450  433439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:48:03.405570  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:48:03.415947  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:03.448935  433439 start.go:296] duration metric: took 138.593583ms for postStartSetup
	I0408 12:48:03.449025  433439 fix.go:56] duration metric: took 23.946779964s for fixHost
	I0408 12:48:03.449055  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.452026  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452392  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.452435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452630  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.452844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453063  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453248  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.453420  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.453604  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.453615  433439 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:48:03.565710  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580483.551031252
	
	I0408 12:48:03.565738  433439 fix.go:216] guest clock: 1712580483.551031252
	I0408 12:48:03.565750  433439 fix.go:229] Guest: 2024-04-08 12:48:03.551031252 +0000 UTC Remote: 2024-04-08 12:48:03.44903588 +0000 UTC m=+361.760256784 (delta=101.995372ms)
	I0408 12:48:03.565777  433439 fix.go:200] guest clock delta is within tolerance: 101.995372ms
	I0408 12:48:03.565787  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 24.063582343s
	I0408 12:48:03.565806  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.566106  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:03.569409  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.569776  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.569814  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.570017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570577  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570831  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570952  433439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:48:03.571021  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.571121  433439 ssh_runner.go:195] Run: cat /version.json
	I0408 12:48:03.571146  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.573939  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574167  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574300  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574333  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574469  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574594  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574621  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574674  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.574757  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574871  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.574957  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.575130  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.575441  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.575590  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.695930  433439 ssh_runner.go:195] Run: systemctl --version
	I0408 12:48:03.702915  433439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:48:03.853737  433439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:48:03.860218  433439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:48:03.860287  433439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:48:03.877827  433439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:48:03.877861  433439 start.go:494] detecting cgroup driver to use...
	I0408 12:48:03.877943  433439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:48:03.897232  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:48:03.913028  433439 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:48:03.913112  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:48:03.929574  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:48:03.946880  433439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:48:04.083524  433439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:48:04.243842  433439 docker.go:233] disabling docker service ...
	I0408 12:48:04.243938  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:48:04.260459  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:48:04.276119  433439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:48:04.428999  433439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:48:04.571431  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:48:04.589661  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:48:04.612872  433439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:48:04.612954  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.625841  433439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:48:04.625939  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.638868  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.652106  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.664883  433439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:48:04.678149  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.691069  433439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.711329  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
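The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. For the first of those edits, a rough Go equivalent (a sketch only; minikube actually performs this via sed over SSH, and the 0644 mode is an assumption):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Force the pause_image line in 02-crio.conf to registry.k8s.io/pause:3.9,
// mirroring the sed command in the log. Path and image are taken from the log.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pause_image updated")
}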
	I0408 12:48:04.725917  433439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:48:04.738875  433439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:48:04.738941  433439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:48:04.756784  433439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:48:04.769852  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:04.895658  433439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:48:05.056165  433439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:48:05.056270  433439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:48:05.061838  433439 start.go:562] Will wait 60s for crictl version
	I0408 12:48:05.061918  433439 ssh_runner.go:195] Run: which crictl
	I0408 12:48:05.066280  433439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:48:05.110966  433439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
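The "Will wait 60s for socket path" step above is effectively a poll loop: stat the CRI socket until it exists or the deadline passes. An illustrative Go sketch of that loop (local polling only, with an assumed 500ms interval; the real code stats the file over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or timeout elapses.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is present")
}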
	I0408 12:48:05.111084  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.142272  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.176138  433439 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:48:00.606508  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:03.107018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:05.109926  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:02.220836  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:04.718465  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.177382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:05.180028  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180334  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:05.180363  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180635  433439 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:48:05.185436  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:05.199001  433439 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:48:05.199130  433439 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:48:05.199174  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:05.239255  433439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:48:05.239358  433439 ssh_runner.go:195] Run: which lz4
	I0408 12:48:05.244115  433439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:48:05.249135  433439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:48:05.249169  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:48:07.606284  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.607161  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.720025  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.219059  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.889921  433439 crio.go:462] duration metric: took 1.645848876s to copy over tarball
	I0408 12:48:06.890006  433439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:48:09.403589  433439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513555281s)
	I0408 12:48:09.403620  433439 crio.go:469] duration metric: took 2.513669951s to extract the tarball
	I0408 12:48:09.403627  433439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:48:09.446487  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:09.494576  433439 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:48:09.494606  433439 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:48:09.494614  433439 kubeadm.go:928] updating node { 192.168.50.7 8444 v1.29.3 crio true true} ...
	I0408 12:48:09.494822  433439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-527454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:48:09.494917  433439 ssh_runner.go:195] Run: crio config
	I0408 12:48:09.541809  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:09.541839  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:09.541859  433439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:48:09.541887  433439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-527454 NodeName:default-k8s-diff-port-527454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:48:09.542105  433439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-527454"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:48:09.542201  433439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:48:09.553494  433439 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:48:09.553591  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:48:09.564970  433439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0408 12:48:09.584888  433439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:48:09.604538  433439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
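The kubeadm config dumped above is written as a single file containing four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick stdlib-only Go sketch (the path is the kubeadm.yaml.new copied in the log) that lists the kind of each document:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Print the "kind:" of every YAML document in the generated kubeadm config.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "kind:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}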
	I0408 12:48:09.623993  433439 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0408 12:48:09.628368  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:09.642170  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:09.789791  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:48:09.808943  433439 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454 for IP: 192.168.50.7
	I0408 12:48:09.808972  433439 certs.go:194] generating shared ca certs ...
	I0408 12:48:09.808995  433439 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:48:09.809194  433439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:48:09.809242  433439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:48:09.809253  433439 certs.go:256] generating profile certs ...
	I0408 12:48:09.809344  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/client.key
	I0408 12:48:09.809415  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key.ad1d04eb
	I0408 12:48:09.809457  433439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key
	I0408 12:48:09.809645  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:48:09.809699  433439 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:48:09.809713  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:48:09.809742  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:48:09.809764  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:48:09.809792  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:48:09.809851  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:09.810516  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:48:09.866085  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:48:09.899718  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:48:09.941704  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:48:09.976180  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 12:48:10.014420  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:48:10.044380  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:48:10.072034  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:48:10.099417  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:48:10.126143  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:48:10.154244  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:48:10.183954  433439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:48:10.207277  433439 ssh_runner.go:195] Run: openssl version
	I0408 12:48:10.213691  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:48:10.228406  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233736  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233798  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.240236  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:48:10.253382  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:48:10.267783  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273234  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273318  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.279925  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:48:10.292710  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:48:10.305381  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310629  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310703  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.317063  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:48:10.330320  433439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:48:10.336138  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:48:10.343341  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:48:10.350536  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:48:10.357665  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:48:10.364925  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:48:10.372314  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
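Each "-checkend 86400" call above asks openssl whether the certificate will still be valid 24 hours from now. The same check expressed in Go as an illustrative stdlib sketch (the file path is copied from the log; this would have to run on the guest, not the host):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path is still valid
// "window" from now, mirroring `openssl x509 -checkend 86400`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for the next 24h:", ok)
}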
	I0408 12:48:10.380001  433439 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:48:10.380107  433439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:48:10.380174  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.425378  433439 cri.go:89] found id: ""
	I0408 12:48:10.425475  433439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:48:10.438972  433439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:48:10.439000  433439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:48:10.439005  433439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:48:10.439051  433439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:48:10.452072  433439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:48:10.453410  433439 kubeconfig.go:125] found "default-k8s-diff-port-527454" server: "https://192.168.50.7:8444"
	I0408 12:48:10.456022  433439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:48:10.469116  433439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0408 12:48:10.469171  433439 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:48:10.469188  433439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:48:10.469256  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.517874  433439 cri.go:89] found id: ""
	I0408 12:48:10.517969  433439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:48:10.538088  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:48:10.551560  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:48:10.551580  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:48:10.551636  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:48:10.564123  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:48:10.564209  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:48:10.578691  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:48:10.590692  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:48:10.590765  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:48:10.602902  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.616831  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:48:10.616922  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.629213  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:48:10.641625  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:48:10.641709  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:48:10.653162  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:48:10.665261  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:10.811712  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.107002  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.606976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:12.188805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.221750  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.901885  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09013292s)
	I0408 12:48:11.975836  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.237051  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.329550  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.460345  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:48:12.460457  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.961443  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.460681  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.520828  433439 api_server.go:72] duration metric: took 1.060470201s to wait for apiserver process to appear ...
	I0408 12:48:13.520866  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:48:13.520899  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:13.521407  433439 api_server.go:269] stopped: https://192.168.50.7:8444/healthz: Get "https://192.168.50.7:8444/healthz": dial tcp 192.168.50.7:8444: connect: connection refused
	I0408 12:48:14.022007  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.564485  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.564526  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:16.564543  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.617870  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.617904  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:17.021290  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.026545  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.026578  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:17.521124  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.529552  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.529596  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:18.021125  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:18.037000  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:48:18.049656  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:48:18.049699  433439 api_server.go:131] duration metric: took 4.528823991s to wait for apiserver health ...
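The healthz sequence above (connection refused, then 403 for the anonymous user, then 500 while post-start hooks finish, then 200) is produced by a poll-until-200 loop against the apiserver. A hedged Go sketch of that loop (the disabled TLS verification, 500ms interval, and 4-minute timeout are assumptions for illustration; the real code authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.7:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}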
	I0408 12:48:18.049722  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:18.049730  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:18.051495  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:48:16.607222  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:18.607837  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.717612  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:19.217050  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.052916  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:48:18.072115  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:48:18.111408  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:48:18.130585  433439 system_pods.go:59] 8 kube-system pods found
	I0408 12:48:18.130629  433439 system_pods.go:61] "coredns-76f75df574-r99kj" [171e271b-eec6-4238-afb1-82a2f228c225] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:48:18.130641  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [7019f1eb-58ef-4b1f-acf3-ed3c1ed84623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:48:18.130651  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [80ccd16d-d883-4c92-bb13-abe2d412532c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:48:18.130661  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [78d513aa-1f24-42c0-bfb9-4c20fdee63f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:48:18.130669  433439 system_pods.go:61] "kube-proxy-ztmmc" [de09a26e-cd95-401a-b575-977fcd660c47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:48:18.130683  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [eac4d549-1763-45b8-be11-b3b9e83f5110] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:48:18.130702  433439 system_pods.go:61] "metrics-server-57f55c9bc5-44qbm" [52631fc6-84d0-443b-ba42-de35a65db0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:48:18.130714  433439 system_pods.go:61] "storage-provisioner" [82e8b0d0-6c22-4644-8bd1-b48887b0fe82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:48:18.130730  433439 system_pods.go:74] duration metric: took 19.293309ms to wait for pod list to return data ...
	I0408 12:48:18.130745  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:48:18.135625  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:48:18.135663  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:48:18.135679  433439 node_conditions.go:105] duration metric: took 4.924641ms to run NodePressure ...
	I0408 12:48:18.135724  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:18.416272  433439 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424302  433439 kubeadm.go:733] kubelet initialised
	I0408 12:48:18.424325  433439 kubeadm.go:734] duration metric: took 8.015642ms waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424342  433439 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:48:18.436706  433439 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.447063  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447102  433439 pod_ready.go:81] duration metric: took 10.361708ms for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.447116  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447126  433439 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.460464  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460496  433439 pod_ready.go:81] duration metric: took 13.357612ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.460513  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460523  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.469991  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470035  433439 pod_ready.go:81] duration metric: took 9.502493ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.470072  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470083  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.516886  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516920  433439 pod_ready.go:81] duration metric: took 46.823396ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.516933  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516940  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915101  433439 pod_ready.go:92] pod "kube-proxy-ztmmc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:18.915131  433439 pod_ready.go:81] duration metric: took 398.182437ms for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915144  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:20.922456  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.107083  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.108249  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.219995  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.718091  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.922607  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:24.922155  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:24.922185  433439 pod_ready.go:81] duration metric: took 6.007031338s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:24.922200  433439 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:25.607653  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.216429  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.218553  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.717516  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.931412  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:29.430930  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.608369  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:33.107424  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:32.717551  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.216256  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.931958  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:34.430950  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.607018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.607820  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:40.106361  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.217721  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:39.218016  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.929987  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:38.931900  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.429986  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:42.605609  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:44.606744  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.717196  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:43.718405  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.430411  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:45.932993  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.607058  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.607716  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.216568  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.218325  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.718153  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:48.430488  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.432250  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:51.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.606447  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.217434  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:55.717265  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:52.928902  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.931253  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:56.106505  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.108187  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.218754  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.718600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:57.431032  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:59.930470  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.608450  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.106647  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.218064  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:01.933210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:04.428894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.428974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.606895  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:08.106819  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:07.718023  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:08.429894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.930925  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.609607  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:13.106296  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.716182  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.717779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:12.931911  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.933011  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:15.607174  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:18.106346  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:17.216822  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.216874  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:17.429653  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.432135  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:20.609415  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.105576  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:25.106666  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:21.217932  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.720613  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:21.929027  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.933879  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.429452  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:27.106798  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:29.605690  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.216525  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.216788  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.217600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.433974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.930607  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.606729  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.107220  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:32.218719  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.718165  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:33.429854  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:35.933211  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:36.605433  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:38.606014  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.217818  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:39.717250  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.431584  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:40.930328  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.106567  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:43.606770  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.718244  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.218855  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:43.428915  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:45.430859  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.106078  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.108320  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.718065  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.721772  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.432167  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:49.929922  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:50.606575  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.106564  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:51.217123  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.217785  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.217941  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:52.429230  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:54.929643  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.607214  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:58.109657  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:57.717805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.219041  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.430097  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:59.430226  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.605915  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:03.106990  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.717304  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.717757  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:01.930407  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.429884  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:06.430557  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:05.605804  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.606737  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:10.107370  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.218275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:09.716548  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.930029  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.429317  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.107556  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:14.606578  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.717001  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:13.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.719875  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:13.429712  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.430255  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:17.106153  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:19.607656  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.217812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.719543  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:17.930126  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.430210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:22.107118  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.107812  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:23.217266  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:25.218476  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:22.928804  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.930608  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:26.607216  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:28.608092  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.717812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:30.218079  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
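The interleaved pod_ready.go lines come from the other profiles under test (processes 433439, 433674 and 433557), which are still waiting for their metrics-server pods to report Ready. A hedged sketch of inspecting that readiness by hand with kubectl; the pod name is copied from the log, while the label selector is an assumption about the addon's labels:

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-44qbm
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-44qbm \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'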
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
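Every "describe nodes" attempt in these cycles fails the same way because nothing is serving the apiserver port yet. Two quick checks from the guest that would confirm this; they are illustrative commands, not part of the test harness:

    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"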
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.429985  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:29.928718  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:31.106901  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.606293  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:32.717659  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.217468  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:31.928982  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.929758  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.930610  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:36.110235  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:38.606389  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.217645  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:39.218104  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:38.429765  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.430575  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.607976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.106175  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:45.107548  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:41.716953  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.717149  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:42.928908  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:44.929425  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:47.606676  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.608190  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.217528  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:48.717623  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:50.729538  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.929626  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.429810  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.106861  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.608143  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:53.217707  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:55.718120  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:51.929891  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.430216  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:57.106572  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.108219  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.216315  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:00.218162  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:56.931254  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.429578  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.606793  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.105581  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:02.218796  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.718232  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:01.929336  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:03.929754  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.429604  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.106159  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.608247  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:07.216779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:09.217275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.929238  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:10.929862  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.107603  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.611978  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.718498  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
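As a rough manual equivalent of the diagnostic sweep recorded above, assuming shell access to the minikube guest: the command strings below are copied from the log, and only the loop wrapper is illustrative. The sweep repeats every few seconds for as long as pgrep finds no kube-apiserver process.

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
    sudo crictl ps -a --quiet --name="$name"    # empty output means no container exists for that component
  done
  sudo crictl ps -a                             # overall container status
  sudo journalctl -u kubelet -n 400             # kubelet logs
  sudo journalctl -u crio -n 400                # CRI-O logs
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Every crictl query returns nothing and the kubectl call is refused on localhost:8443, which is what the retries below keep reporting.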
	I0408 12:51:12.931867  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:15.430299  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.106552  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.106622  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.108053  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.217595  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.217758  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.220115  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:17.930896  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.430609  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.108741  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.605215  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.716480  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.720217  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:22.929962  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:25.430340  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.605294  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:28.606623  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:27.218727  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.716524  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.430647  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.929105  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:30.607227  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.106814  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.106890  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:31.716747  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.718962  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:31.930145  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.931893  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.932546  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:37.606783  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:39.607738  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.217708  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:38.717067  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.721801  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:38.429490  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.932035  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:42.106879  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:44.607134  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:41.210350  433557 pod_ready.go:81] duration metric: took 4m0.000311819s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:41.210399  433557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 12:51:41.210413  433557 pod_ready.go:38] duration metric: took 4m3.201150727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:41.210464  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:51:41.210520  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:41.210591  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:41.269963  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:41.269999  433557 cri.go:89] found id: ""
	I0408 12:51:41.270010  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:41.270072  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.275411  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:41.275495  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:41.319478  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:41.319517  433557 cri.go:89] found id: ""
	I0408 12:51:41.319529  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:41.319590  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.329956  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:41.330045  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:41.380017  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:41.380049  433557 cri.go:89] found id: ""
	I0408 12:51:41.380061  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:41.380131  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.384973  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:41.385077  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:41.429757  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:41.429786  433557 cri.go:89] found id: ""
	I0408 12:51:41.429798  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:41.429863  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.435404  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:41.435488  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:41.484998  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:41.485031  433557 cri.go:89] found id: ""
	I0408 12:51:41.485042  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:41.485111  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.489802  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:41.489878  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:41.543982  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.544016  433557 cri.go:89] found id: ""
	I0408 12:51:41.544028  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:41.544096  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.548766  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:41.548836  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:41.588398  433557 cri.go:89] found id: ""
	I0408 12:51:41.588425  433557 logs.go:276] 0 containers: []
	W0408 12:51:41.588433  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:41.588439  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:41.588498  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:41.635748  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:41.635771  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:41.635775  433557 cri.go:89] found id: ""
	I0408 12:51:41.635782  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:41.635849  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.641800  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.646173  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:41.646206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.717189  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:41.717228  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:41.779618  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:41.779653  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:41.840050  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:41.840092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:41.855982  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:41.856016  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:42.016416  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:42.016455  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:42.085493  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:42.085538  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:42.132590  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:42.132626  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:42.642069  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:42.642125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:42.708516  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:42.708566  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:42.759072  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:42.759125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:42.810189  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:42.810242  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:42.855931  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:42.855971  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.396658  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.414640  433557 api_server.go:72] duration metric: took 4m14.728700184s to wait for apiserver process to appear ...
	I0408 12:51:45.414671  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:51:45.414714  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.414772  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.460983  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:45.461012  433557 cri.go:89] found id: ""
	I0408 12:51:45.461023  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:45.461102  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.466928  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.467037  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.516723  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:45.516746  433557 cri.go:89] found id: ""
	I0408 12:51:45.516755  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:45.516813  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.521315  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.521413  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.560838  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.560865  433557 cri.go:89] found id: ""
	I0408 12:51:45.560876  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:45.560926  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.565852  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.565937  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.610154  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:45.610175  433557 cri.go:89] found id: ""
	I0408 12:51:45.610183  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:45.610229  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.615014  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.615098  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.658261  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:45.658292  433557 cri.go:89] found id: ""
	I0408 12:51:45.658304  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:45.658367  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.663148  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.663242  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:45.708805  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.708838  433557 cri.go:89] found id: ""
	I0408 12:51:45.708850  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:45.708906  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.713733  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:45.713800  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:45.763432  433557 cri.go:89] found id: ""
	I0408 12:51:45.763465  433557 logs.go:276] 0 containers: []
	W0408 12:51:45.763477  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:45.763486  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:45.763555  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:45.808689  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:45.808711  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.808715  433557 cri.go:89] found id: ""
	I0408 12:51:45.808723  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:45.808782  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.813386  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.818556  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:45.818589  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:43.429577  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:45.431057  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:47.106109  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:48.599265  433674 pod_ready.go:81] duration metric: took 4m0.000260398s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:48.599302  433674 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:51:48.599335  433674 pod_ready.go:38] duration metric: took 4m13.995684279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:48.599373  433674 kubeadm.go:591] duration metric: took 4m22.072516751s to restartPrimaryControlPlane
	W0408 12:51:48.599529  433674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:48.599619  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:45.864458  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:45.864503  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.907964  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:45.908000  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.980082  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:45.980123  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:46.041294  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:46.041330  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:46.102117  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:46.102171  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:46.188553  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:46.188583  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:46.234191  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:46.234229  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:46.281240  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.281273  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.721047  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.721092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.781387  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.781429  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:46.797003  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.797043  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:46.917073  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.917109  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:49.481948  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:51:49.488261  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:51:49.489694  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:51:49.489726  433557 api_server.go:131] duration metric: took 4.075047023s to wait for apiserver health ...
	I0408 12:51:49.489737  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:51:49.489772  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:49.489845  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:49.535955  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.535980  433557 cri.go:89] found id: ""
	I0408 12:51:49.535990  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:49.536052  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.543143  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:49.543239  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.590041  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:49.590075  433557 cri.go:89] found id: ""
	I0408 12:51:49.590087  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:49.590155  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.595726  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.595803  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.645009  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:49.645046  433557 cri.go:89] found id: ""
	I0408 12:51:49.645057  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:49.645110  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.650243  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.650329  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.693859  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:49.693882  433557 cri.go:89] found id: ""
	I0408 12:51:49.693895  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:49.693972  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.699620  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.699709  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.755614  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:49.755646  433557 cri.go:89] found id: ""
	I0408 12:51:49.755657  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:49.755739  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.761838  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.761913  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.808919  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:49.808950  433557 cri.go:89] found id: ""
	I0408 12:51:49.808961  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:49.809040  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.813965  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.814046  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.859700  433557 cri.go:89] found id: ""
	I0408 12:51:49.859737  433557 logs.go:276] 0 containers: []
	W0408 12:51:49.859748  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.859757  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:49.859832  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:49.908020  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:49.908044  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:49.908050  433557 cri.go:89] found id: ""
	I0408 12:51:49.908060  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:49.908129  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.913034  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.919193  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:49.919233  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.984657  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.984704  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:50.003487  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:50.003526  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:50.139417  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:50.139481  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:50.240166  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:50.240206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:50.288776  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:50.288823  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:50.339222  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:50.339252  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:50.402263  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:50.402308  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:50.461894  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:50.461946  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:50.507329  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:50.507373  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:50.576851  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:50.576894  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:47.932221  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.430702  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.947824  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:50.947878  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:51.007034  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:51.007084  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:53.563768  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:51:53.563811  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.563818  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.563824  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.563829  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.563835  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.563840  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.563850  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.563857  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.563870  433557 system_pods.go:74] duration metric: took 4.074125222s to wait for pod list to return data ...
	I0408 12:51:53.563884  433557 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:51:53.566991  433557 default_sa.go:45] found service account: "default"
	I0408 12:51:53.567015  433557 default_sa.go:55] duration metric: took 3.122873ms for default service account to be created ...
	I0408 12:51:53.567024  433557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:51:53.574517  433557 system_pods.go:86] 8 kube-system pods found
	I0408 12:51:53.574558  433557 system_pods.go:89] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.574565  433557 system_pods.go:89] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.574570  433557 system_pods.go:89] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.574575  433557 system_pods.go:89] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.574581  433557 system_pods.go:89] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.574587  433557 system_pods.go:89] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.574598  433557 system_pods.go:89] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.574605  433557 system_pods.go:89] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.574616  433557 system_pods.go:126] duration metric: took 7.585497ms to wait for k8s-apps to be running ...
	I0408 12:51:53.574629  433557 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:51:53.574720  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:53.597605  433557 system_svc.go:56] duration metric: took 22.957663ms WaitForService to wait for kubelet
	I0408 12:51:53.597658  433557 kubeadm.go:576] duration metric: took 4m22.91172229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:51:53.597683  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:51:53.601940  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:51:53.601992  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:51:53.602009  433557 node_conditions.go:105] duration metric: took 4.320913ms to run NodePressure ...
	I0408 12:51:53.602028  433557 start.go:240] waiting for startup goroutines ...
	I0408 12:51:53.602040  433557 start.go:245] waiting for cluster config update ...
	I0408 12:51:53.602060  433557 start.go:254] writing updated cluster config ...
	I0408 12:51:53.602426  433557 ssh_runner.go:195] Run: rm -f paused
	I0408 12:51:53.660257  433557 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:51:53.662533  433557 out.go:177] * Done! kubectl is now configured to use "no-preload-135234" cluster and "default" namespace by default
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:51:52.434513  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:54.930343  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:57.432046  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:59.436031  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:01.930142  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:03.931249  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:06.429806  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:08.929311  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:10.929707  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:13.430287  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:15.430449  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:17.933664  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:20.428983  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:21.300307  433674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.700649463s)
	I0408 12:52:21.300429  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:21.321628  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:21.334359  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:21.345697  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:21.345755  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:21.345804  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:52:21.356798  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:21.356868  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:21.368622  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:52:21.379589  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:21.379676  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:21.391211  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.401783  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:21.401874  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.413655  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:52:21.424585  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:21.424673  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:21.436887  433674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:21.495891  433674 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:21.496022  433674 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:21.667820  433674 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:21.667973  433674 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:21.668100  433674 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:21.904532  433674 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:21.906631  433674 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:21.906736  433674 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:21.906833  433674 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:21.906962  433674 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:21.907084  433674 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:21.907206  433674 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:21.907283  433674 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:21.907372  433674 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:21.907705  433674 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:21.908164  433674 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:21.908536  433674 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:21.908852  433674 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:21.908942  433674 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:22.096319  433674 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:22.286425  433674 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:22.442534  433674 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:22.542901  433674 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:22.959098  433674 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:22.959656  433674 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:22.962359  433674 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:22.965011  433674 out.go:204]   - Booting up control plane ...
	I0408 12:52:22.965148  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:22.965830  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:22.966718  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:22.987425  433674 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:22.988618  433674 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:22.988690  433674 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:23.134634  433674 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:52:22.429735  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.431237  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.923026  433439 pod_ready.go:81] duration metric: took 4m0.000804438s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	E0408 12:52:24.923079  433439 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:52:24.923103  433439 pod_ready.go:38] duration metric: took 4m6.498748448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:24.923143  433439 kubeadm.go:591] duration metric: took 4m14.484131334s to restartPrimaryControlPlane
	W0408 12:52:24.923222  433439 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:52:24.923260  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:52:29.641484  433674 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505486 seconds
	I0408 12:52:29.659612  433674 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:52:29.683882  433674 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:52:30.237806  433674 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:52:30.238135  433674 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-488947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:52:30.755095  433674 kubeadm.go:309] [bootstrap-token] Using token: kwhj7g.e2hm9yupaxknooep
	I0408 12:52:30.756904  433674 out.go:204]   - Configuring RBAC rules ...
	I0408 12:52:30.757044  433674 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:52:30.763322  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:52:30.776489  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:52:30.780180  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:52:30.784949  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:52:30.789409  433674 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:52:30.810228  433674 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:52:31.071672  433674 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:52:31.180390  433674 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:52:31.180421  433674 kubeadm.go:309] 
	I0408 12:52:31.180493  433674 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:52:31.180504  433674 kubeadm.go:309] 
	I0408 12:52:31.180626  433674 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:52:31.180652  433674 kubeadm.go:309] 
	I0408 12:52:31.180682  433674 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:52:31.180758  433674 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:52:31.180823  433674 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:52:31.180835  433674 kubeadm.go:309] 
	I0408 12:52:31.180898  433674 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:52:31.180908  433674 kubeadm.go:309] 
	I0408 12:52:31.180967  433674 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:52:31.180978  433674 kubeadm.go:309] 
	I0408 12:52:31.181069  433674 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:52:31.181200  433674 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:52:31.181301  433674 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:52:31.181312  433674 kubeadm.go:309] 
	I0408 12:52:31.181446  433674 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:52:31.181564  433674 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:52:31.181577  433674 kubeadm.go:309] 
	I0408 12:52:31.181706  433674 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.181869  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:52:31.181923  433674 kubeadm.go:309] 	--control-plane 
	I0408 12:52:31.181933  433674 kubeadm.go:309] 
	I0408 12:52:31.182039  433674 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:52:31.182055  433674 kubeadm.go:309] 
	I0408 12:52:31.182167  433674 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.182323  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:52:31.182467  433674 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
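For reference, the --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 of the cluster CA certificate's public key (its DER-encoded SubjectPublicKeyInfo, per the kubeadm docs). Below is a minimal Go sketch that recomputes such a hash, assuming the CA certificate has been copied out of the VM to a local ca.crt; the path is a placeholder and this is not part of the test harness:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Placeholder path: the CA cert would first need to be copied out of the minikube VM.
        pemBytes, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }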
	I0408 12:52:31.182492  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:52:31.182502  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:52:31.184299  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:52:31.185716  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:52:31.217708  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:52:31.277627  433674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:52:31.277716  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:31.277740  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-488947 minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=embed-certs-488947 minikube.k8s.io/primary=true
	I0408 12:52:31.591490  433674 ops.go:34] apiserver oom_adj: -16
	I0408 12:52:31.591651  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.092642  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.591845  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.092645  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.592585  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.092066  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.592232  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.091882  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.591794  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.091849  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.592616  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.091816  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.091756  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.592114  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.092524  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.591838  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.091853  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.591747  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.092421  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.592611  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.092369  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.092638  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.592549  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.091831  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.592358  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.799776  433674 kubeadm.go:1107] duration metric: took 13.522136387s to wait for elevateKubeSystemPrivileges
	W0408 12:52:44.799833  433674 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:52:44.799845  433674 kubeadm.go:393] duration metric: took 5m18.325910079s to StartCluster
	I0408 12:52:44.799870  433674 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.799981  433674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:52:44.802396  433674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.802704  433674 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:52:44.804525  433674 out.go:177] * Verifying Kubernetes components...
	I0408 12:52:44.802776  433674 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:52:44.802921  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:52:44.805724  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:52:44.805735  433674 addons.go:69] Setting metrics-server=true in profile "embed-certs-488947"
	I0408 12:52:44.805751  433674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488947"
	I0408 12:52:44.805777  433674 addons.go:234] Setting addon metrics-server=true in "embed-certs-488947"
	W0408 12:52:44.805792  433674 addons.go:243] addon metrics-server should already be in state true
	I0408 12:52:44.805824  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805727  433674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488947"
	I0408 12:52:44.805869  433674 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-488947"
	W0408 12:52:44.805883  433674 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:52:44.805915  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805834  433674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488947"
	I0408 12:52:44.806260  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806262  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806266  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806286  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806288  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806326  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.824170  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0408 12:52:44.824862  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.825517  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.825547  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.826049  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.826714  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.826752  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.827345  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0408 12:52:44.827569  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0408 12:52:44.828195  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828218  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828860  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.828892  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829023  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.829040  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829499  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829541  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829687  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.830201  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.830247  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.834128  433674 addons.go:234] Setting addon default-storageclass=true in "embed-certs-488947"
	W0408 12:52:44.834156  433674 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:52:44.834189  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.834569  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.834611  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.845829  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 12:52:44.846556  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.847545  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.847571  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.848210  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.848478  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.850407  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.850783  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0408 12:52:44.853144  433674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:52:44.851322  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.854214  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0408 12:52:44.855198  433674 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:44.855222  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:52:44.855245  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.855434  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.855766  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855797  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.855936  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855956  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.856190  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856264  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856382  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.856937  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.856973  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.857994  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.859623  433674 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:52:44.860991  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:52:44.861012  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:52:44.858778  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.861032  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.861051  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.861072  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.859293  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.861282  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.861617  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.861817  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.863813  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864274  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.864299  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864483  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.864681  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.864846  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.865028  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.874355  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0408 12:52:44.874834  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.875388  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.875418  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.875775  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.875967  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.877519  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.877786  433674 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:44.877803  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:52:44.877818  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.880463  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.880846  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.880874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.881040  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.881234  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.881615  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.881753  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
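The sshutil entries above record minikube opening key-based SSH sessions to the VM (user "docker", the profile's id_rsa key) for each addon file copy and command. A minimal sketch of an equivalent connection using golang.org/x/crypto/ssh, with the IP and key path taken from the log; this is not minikube's own sshutil implementation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
        }
        client, err := ssh.Dial("tcp", "192.168.72.159:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("%s (err=%v)\n", out, err)
    }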
	I0408 12:52:45.057304  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:52:45.082702  433674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.091955  433674 node_ready.go:49] node "embed-certs-488947" has status "Ready":"True"
	I0408 12:52:45.091994  433674 node_ready.go:38] duration metric: took 9.246027ms for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.092007  433674 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:45.101654  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:45.237037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:52:45.237068  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:52:45.238421  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:45.274088  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:45.295037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:52:45.295078  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:52:45.397474  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:45.397504  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:52:45.431610  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:46.375681  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101541881s)
	I0408 12:52:46.375827  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.375862  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376204  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.376244  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.137166571s)
	I0408 12:52:46.376271  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.376291  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.376309  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376313  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376319  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.377184  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377205  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377613  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.377680  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377699  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377709  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.377747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.378168  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.378182  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.413325  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.413361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.413757  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.413780  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.679538  433674 pod_ready.go:92] pod "coredns-76f75df574-4gdp4" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.679577  433674 pod_ready.go:81] duration metric: took 1.577895468s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.679596  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760007  433674 pod_ready.go:92] pod "coredns-76f75df574-r5rxq" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.760043  433674 pod_ready.go:81] duration metric: took 80.437752ms for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760059  433674 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.803070  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371401052s)
	I0408 12:52:46.803136  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803150  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803496  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803519  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803530  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803539  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803846  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803862  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803882  433674 addons.go:470] Verifying addon metrics-server=true in "embed-certs-488947"
	I0408 12:52:46.806034  433674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0408 12:52:46.804164  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.807597  433674 pod_ready.go:81] duration metric: took 47.521367ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807622  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807621  433674 addons.go:505] duration metric: took 2.004847213s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0408 12:52:46.827049  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.827075  433674 pod_ready.go:81] duration metric: took 19.440746ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.827086  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848718  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.848759  433674 pod_ready.go:81] duration metric: took 21.664037ms for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848775  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087350  433674 pod_ready.go:92] pod "kube-proxy-mqrtp" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.087387  433674 pod_ready.go:81] duration metric: took 238.602902ms for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087403  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486822  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.486863  433674 pod_ready.go:81] duration metric: took 399.44977ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486875  433674 pod_ready.go:38] duration metric: took 2.394853452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
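The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, or the wait budget (6m0s here, 4m0s in the earlier metrics-server failure) runs out. A minimal client-go sketch of that kind of loop, with a placeholder kubeconfig path and pod name; minikube's actual pod_ready implementation differs in detail:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-4gdp4", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // A pod counts as Ready once its PodReady condition is True.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }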
	I0408 12:52:47.486895  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:52:47.486967  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:52:47.517426  433674 api_server.go:72] duration metric: took 2.714672176s to wait for apiserver process to appear ...
	I0408 12:52:47.517461  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:52:47.517492  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:52:47.527074  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:52:47.528230  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:52:47.528285  433674 api_server.go:131] duration metric: took 10.815426ms to wait for apiserver health ...
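The healthz probe logged above is a plain HTTPS GET against the apiserver endpoint that expects a 200 response with body "ok". A minimal sketch of the same check; TLS verification is skipped only for illustration, whereas a proper client would trust the cluster CA from the kubeconfig:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only: skipping verification avoids needing the cluster CA here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.72.159:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }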
	I0408 12:52:47.528296  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:52:47.692054  433674 system_pods.go:59] 9 kube-system pods found
	I0408 12:52:47.692091  433674 system_pods.go:61] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:47.692096  433674 system_pods.go:61] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:47.692101  433674 system_pods.go:61] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:47.692105  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:47.692109  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:47.692112  433674 system_pods.go:61] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:47.692116  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:47.692123  433674 system_pods.go:61] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:47.692129  433674 system_pods.go:61] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:47.692137  433674 system_pods.go:74] duration metric: took 163.833038ms to wait for pod list to return data ...
	I0408 12:52:47.692146  433674 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:52:47.886668  433674 default_sa.go:45] found service account: "default"
	I0408 12:52:47.886695  433674 default_sa.go:55] duration metric: took 194.543392ms for default service account to be created ...
	I0408 12:52:47.886707  433674 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:52:48.090174  433674 system_pods.go:86] 9 kube-system pods found
	I0408 12:52:48.090212  433674 system_pods.go:89] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:48.090217  433674 system_pods.go:89] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:48.090222  433674 system_pods.go:89] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:48.090226  433674 system_pods.go:89] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:48.090232  433674 system_pods.go:89] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:48.090236  433674 system_pods.go:89] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:48.090240  433674 system_pods.go:89] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:48.090248  433674 system_pods.go:89] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:48.090253  433674 system_pods.go:89] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:48.090260  433674 system_pods.go:126] duration metric: took 203.547421ms to wait for k8s-apps to be running ...
	I0408 12:52:48.090266  433674 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:52:48.090312  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:48.106285  433674 system_svc.go:56] duration metric: took 15.998172ms WaitForService to wait for kubelet
	I0408 12:52:48.106322  433674 kubeadm.go:576] duration metric: took 3.303579521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:52:48.106345  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:52:48.287351  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:52:48.287381  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:52:48.287392  433674 node_conditions.go:105] duration metric: took 181.042972ms to run NodePressure ...
	I0408 12:52:48.287403  433674 start.go:240] waiting for startup goroutines ...
	I0408 12:52:48.287410  433674 start.go:245] waiting for cluster config update ...
	I0408 12:52:48.287419  433674 start.go:254] writing updated cluster config ...
	I0408 12:52:48.287738  433674 ssh_runner.go:195] Run: rm -f paused
	I0408 12:52:48.341532  433674 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:52:48.343890  433674 out.go:177] * Done! kubectl is now configured to use "embed-certs-488947" cluster and "default" namespace by default
	I0408 12:52:57.475303  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.552015668s)
	I0408 12:52:57.475390  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:57.492800  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:57.507211  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:57.520174  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:57.520203  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:57.520267  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:52:57.531854  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:57.531939  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:57.543764  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:52:57.555407  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:57.555479  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:57.569452  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.580478  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:57.580575  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.591819  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:52:57.602496  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:57.602589  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:57.613811  433439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:57.669998  433439 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:57.670137  433439 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:57.830674  433439 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:57.830802  433439 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:57.830882  433439 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:58.090382  433439 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:58.092626  433439 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:58.092733  433439 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:58.092809  433439 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:58.092906  433439 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:58.093027  433439 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:58.093130  433439 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:58.093202  433439 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:58.093547  433439 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:58.093941  433439 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:58.094342  433439 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:58.094708  433439 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:58.095077  433439 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:58.095159  433439 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:58.328890  433439 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:58.516475  433439 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:58.830765  433439 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:59.052737  433439 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:59.306668  433439 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:59.307305  433439 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:59.312102  433439 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:59.314983  433439 out.go:204]   - Booting up control plane ...
	I0408 12:52:59.315104  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:59.315191  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:59.315305  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:59.334624  433439 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:59.335637  433439 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:59.335713  433439 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:59.486408  433439 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:05.490227  433439 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002996 seconds
	I0408 12:53:05.526221  433439 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:53:05.553758  433439 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:53:06.101116  433439 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:53:06.101340  433439 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-527454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:53:06.616939  433439 kubeadm.go:309] [bootstrap-token] Using token: oe56hb.uz3a0dd96vnry1w3
	I0408 12:53:06.618840  433439 out.go:204]   - Configuring RBAC rules ...
	I0408 12:53:06.619038  433439 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:53:06.625364  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:53:06.638696  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:53:06.643811  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:53:06.647895  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:53:06.651857  433439 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:53:06.677056  433439 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:53:06.939588  433439 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:53:07.038633  433439 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:53:07.041464  433439 kubeadm.go:309] 
	I0408 12:53:07.041565  433439 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:53:07.041578  433439 kubeadm.go:309] 
	I0408 12:53:07.041680  433439 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:53:07.041699  433439 kubeadm.go:309] 
	I0408 12:53:07.041723  433439 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:53:07.041824  433439 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:53:07.041906  433439 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:53:07.041917  433439 kubeadm.go:309] 
	I0408 12:53:07.041988  433439 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:53:07.041998  433439 kubeadm.go:309] 
	I0408 12:53:07.042103  433439 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:53:07.042123  433439 kubeadm.go:309] 
	I0408 12:53:07.042168  433439 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:53:07.042253  433439 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:53:07.042351  433439 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:53:07.042361  433439 kubeadm.go:309] 
	I0408 12:53:07.042588  433439 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:53:07.042708  433439 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:53:07.042719  433439 kubeadm.go:309] 
	I0408 12:53:07.042823  433439 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.042959  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:53:07.042994  433439 kubeadm.go:309] 	--control-plane 
	I0408 12:53:07.043003  433439 kubeadm.go:309] 
	I0408 12:53:07.043127  433439 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:53:07.043143  433439 kubeadm.go:309] 
	I0408 12:53:07.043253  433439 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.043400  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:53:07.043583  433439 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:53:07.043608  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:53:07.043620  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:53:07.045283  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:53:07.046614  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:53:07.074907  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:53:07.107168  433439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:53:07.107232  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.107256  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-527454 minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=default-k8s-diff-port-527454 minikube.k8s.io/primary=true
	I0408 12:53:07.208551  433439 ops.go:34] apiserver oom_adj: -16
	I0408 12:53:07.395206  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.896090  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.396097  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.896240  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.395654  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.895751  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.396242  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.896204  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.395766  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.895555  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.396014  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.896092  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.395507  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.895832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.395237  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.895333  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.396191  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.895561  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.395832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.895785  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.395460  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.895320  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.395826  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.896002  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.396326  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.514796  433439 kubeadm.go:1107] duration metric: took 12.407623504s to wait for elevateKubeSystemPrivileges
	W0408 12:53:19.514843  433439 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:53:19.514856  433439 kubeadm.go:393] duration metric: took 5m9.134867072s to StartCluster
	I0408 12:53:19.514882  433439 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.514981  433439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:53:19.516708  433439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.516988  433439 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:53:19.518597  433439 out.go:177] * Verifying Kubernetes components...
	I0408 12:53:19.517057  433439 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:53:19.517238  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:53:19.519990  433439 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520011  433439 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:19.520003  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0408 12:53:19.520052  433439 addons.go:243] addon metrics-server should already be in state true
	I0408 12:53:19.520095  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.519995  433439 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520161  433439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.520247  433439 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:53:19.520274  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.520519  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520521  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520555  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520616  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520639  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520556  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.536637  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0408 12:53:19.536896  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0408 12:53:19.536997  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0408 12:53:19.537194  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537369  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537453  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537748  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537772  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.537883  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537895  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538210  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538262  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538352  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.538372  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538815  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.538818  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538875  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.539030  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.542211  433439 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.542228  433439 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:53:19.542252  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.542841  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.542871  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.556920  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0408 12:53:19.557552  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0408 12:53:19.557712  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.557930  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.558468  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.558482  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.559174  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.559474  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.559852  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.559881  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.560358  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.561323  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.561357  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.561606  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.563808  433439 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:53:19.565205  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:53:19.565224  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:53:19.565252  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.565914  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0408 12:53:19.566710  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.567503  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.567521  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.568270  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.568656  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.568664  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.569109  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.569136  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.569294  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.569451  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.569707  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.569894  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.570455  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.572243  433439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:53:19.573764  433439 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:19.573784  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:53:19.573804  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.576844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577310  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.577380  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577547  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.577851  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.578009  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.578154  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.579402  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0408 12:53:19.579860  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.580428  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.580448  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.581001  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.581202  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.582638  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.582913  433439 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:19.582929  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:53:19.582949  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.585995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586456  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.586488  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586665  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.586845  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.586974  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.587077  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.782606  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:53:19.822413  433439 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833467  433439 node_ready.go:49] node "default-k8s-diff-port-527454" has status "Ready":"True"
	I0408 12:53:19.833493  433439 node_ready.go:38] duration metric: took 11.040127ms for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833503  433439 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:19.845052  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:19.990826  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:20.027800  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:53:20.027827  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:53:20.066661  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:20.168240  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:53:20.168271  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:53:20.327307  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.327336  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:53:20.390128  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.455235  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455265  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455575  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455607  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.455618  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455628  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455912  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455929  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.494751  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.494778  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.495103  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.495126  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.495132  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.454862  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.388156991s)
	I0408 12:53:21.454942  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.454956  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455313  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.455368  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455377  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455386  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.455395  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455729  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455753  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455797  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.591677  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201496165s)
	I0408 12:53:21.591745  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.591760  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.592145  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592183  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592199  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.592214  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592484  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592501  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592513  433439 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:21.594462  433439 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0408 12:53:21.595731  433439 addons.go:505] duration metric: took 2.078676652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0408 12:53:21.852741  433439 pod_ready.go:102] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"False"
	I0408 12:53:22.375241  433439 pod_ready.go:92] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.375283  433439 pod_ready.go:81] duration metric: took 2.53020032s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.375298  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.391968  433439 pod_ready.go:92] pod "coredns-76f75df574-z56lf" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.392003  433439 pod_ready.go:81] duration metric: took 16.695581ms for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.392018  433439 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398659  433439 pod_ready.go:92] pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.398688  433439 pod_ready.go:81] duration metric: took 6.657546ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398699  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407214  433439 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.407241  433439 pod_ready.go:81] duration metric: took 8.535246ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407252  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416605  433439 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.416632  433439 pod_ready.go:81] duration metric: took 9.374648ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416644  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750191  433439 pod_ready.go:92] pod "kube-proxy-tlhff" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.750220  433439 pod_ready.go:81] duration metric: took 333.570363ms for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750231  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.148980  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:23.149009  433439 pod_ready.go:81] duration metric: took 398.771226ms for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.149018  433439 pod_ready.go:38] duration metric: took 3.315505787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:23.149034  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:53:23.149087  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:53:23.165120  433439 api_server.go:72] duration metric: took 3.648094543s to wait for apiserver process to appear ...
	I0408 12:53:23.165149  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:53:23.165170  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:53:23.171016  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:53:23.172486  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:53:23.172510  433439 api_server.go:131] duration metric: took 7.354957ms to wait for apiserver health ...
	I0408 12:53:23.172518  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:53:23.353807  433439 system_pods.go:59] 9 kube-system pods found
	I0408 12:53:23.353846  433439 system_pods.go:61] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.353853  433439 system_pods.go:61] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.353859  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.353866  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.353874  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.353879  433439 system_pods.go:61] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.353883  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.353890  433439 system_pods.go:61] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.353896  433439 system_pods.go:61] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.353911  433439 system_pods.go:74] duration metric: took 181.386053ms to wait for pod list to return data ...
	I0408 12:53:23.353923  433439 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:53:23.549663  433439 default_sa.go:45] found service account: "default"
	I0408 12:53:23.549702  433439 default_sa.go:55] duration metric: took 195.766529ms for default service account to be created ...
	I0408 12:53:23.549717  433439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:53:23.755668  433439 system_pods.go:86] 9 kube-system pods found
	I0408 12:53:23.755729  433439 system_pods.go:89] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.755739  433439 system_pods.go:89] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.755748  433439 system_pods.go:89] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.755755  433439 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.755761  433439 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.755768  433439 system_pods.go:89] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.755774  433439 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.755787  433439 system_pods.go:89] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.755792  433439 system_pods.go:89] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.755805  433439 system_pods.go:126] duration metric: took 206.081481ms to wait for k8s-apps to be running ...
	I0408 12:53:23.755814  433439 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:53:23.755866  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:23.774910  433439 system_svc.go:56] duration metric: took 19.080727ms WaitForService to wait for kubelet
	I0408 12:53:23.774954  433439 kubeadm.go:576] duration metric: took 4.257931558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:53:23.774985  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:53:23.949588  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:53:23.949618  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:53:23.949630  433439 node_conditions.go:105] duration metric: took 174.638826ms to run NodePressure ...
	I0408 12:53:23.949642  433439 start.go:240] waiting for startup goroutines ...
	I0408 12:53:23.949649  433439 start.go:245] waiting for cluster config update ...
	I0408 12:53:23.949659  433439 start.go:254] writing updated cluster config ...
	I0408 12:53:23.949929  433439 ssh_runner.go:195] Run: rm -f paused
	I0408 12:53:24.004633  433439 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:53:24.007640  433439 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-527454" cluster and "default" namespace by default
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
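[editor's note] The repeated kubelet-check messages above are kubeadm polling the kubelet's local healthz endpoint. A minimal manual check, assuming shell access to the node (for example via `minikube ssh -p <profile>`, where the profile name is a placeholder and not taken from this log), would be:

    # probe the same kubelet health endpoint quoted in the kubelet-check lines above
    curl -sSL http://localhost:10248/healthz
    # a healthy kubelet answers "ok"; "connection refused", as seen above, means nothing is listening on 10248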
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 
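[editor's note] The suggestion and the kubeadm advice above amount to the commands already quoted in this log. A minimal troubleshooting sketch, assuming shell access to the failing node and a retry from the host (the profile name below is a placeholder, not taken from this log):

    # on the node: is the kubelet running, and why did it stop?
    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 100
    # on the node: did CRI-O start any control-plane containers at all?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # from the host: retry with the cgroup-driver hint from the suggestion above
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd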
	
	
	==> CRI-O <==
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.164305214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581346164278853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8315439-0c60-4a44-9915-ff065ababd57 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.164766242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd8e4dda-8a60-4b37-a104-c11034167764 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.164819098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd8e4dda-8a60-4b37-a104-c11034167764 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.165075756Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd8e4dda-8a60-4b37-a104-c11034167764 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.210036012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5bba7e1-a2b5-4295-b47c-a606a4c51da3 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.210132488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5bba7e1-a2b5-4295-b47c-a606a4c51da3 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.212390391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b205937-18f2-49cd-be87-5adcbb340f87 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.213231222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581346213203744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b205937-18f2-49cd-be87-5adcbb340f87 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.214140198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3d31096-882e-46f5-8bc3-af31eb44e8cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.214242794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3d31096-882e-46f5-8bc3-af31eb44e8cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.214512722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3d31096-882e-46f5-8bc3-af31eb44e8cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.260521419Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c530789b-b692-4b74-bdf6-fdeca0142294 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.260593663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c530789b-b692-4b74-bdf6-fdeca0142294 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.262138122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6280091c-2eb4-49d7-9fc7-6deaa6035d41 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.262607130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581346262582289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6280091c-2eb4-49d7-9fc7-6deaa6035d41 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.263292797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9415c04f-60ad-491f-929f-ed14fa2f98a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.263369543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9415c04f-60ad-491f-929f-ed14fa2f98a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.263547799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9415c04f-60ad-491f-929f-ed14fa2f98a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.303685423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8428778b-fda4-47cb-995f-9c71434326b6 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.303886095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8428778b-fda4-47cb-995f-9c71434326b6 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.305503755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c38fc9b0-eaa8-40bd-a21a-34e017f718d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.306290598Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581346306122340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c38fc9b0-eaa8-40bd-a21a-34e017f718d1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.307052745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c2b1c06-37a9-4251-b854-1db6d5e4945d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.307146078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c2b1c06-37a9-4251-b854-1db6d5e4945d name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:02:26 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:02:26.307481677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c2b1c06-37a9-4251-b854-1db6d5e4945d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab9b8ad3dd0b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5d816e024dad9       storage-provisioner
	152b1f090e632       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5ce1be244803e       coredns-76f75df574-z56lf
	d5be5b73f4749       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3c5961baf1a97       coredns-76f75df574-7v2jc
	7674f0c7c9a53       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   70a5bbfac971a       kube-proxy-tlhff
	45beb8e8d0672       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   21397eb0eeec3       etcd-default-k8s-diff-port-527454
	e63266466add0       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   b32f80755ee42       kube-scheduler-default-k8s-diff-port-527454
	e76cf4cd181c5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   442bb75a728c9       kube-controller-manager-default-k8s-diff-port-527454
	0b459dab2129e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   c68fa3593ad7d       kube-apiserver-default-k8s-diff-port-527454
	
	
	==> coredns [152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-527454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-527454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=default-k8s-diff-port-527454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:53:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-527454
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 13:02:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 12:58:34 +0000   Mon, 08 Apr 2024 12:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 12:58:34 +0000   Mon, 08 Apr 2024 12:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 12:58:34 +0000   Mon, 08 Apr 2024 12:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 12:58:34 +0000   Mon, 08 Apr 2024 12:53:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    default-k8s-diff-port-527454
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 70aaa6404194440f92b37b0f9932f978
	  System UUID:                70aaa640-4194-440f-92b3-7b0f9932f978
	  Boot ID:                    400b1cc2-5095-46c2-bd20-ea1e3e6c2916
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-7v2jc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-76f75df574-z56lf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-527454                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-527454             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-527454    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-tlhff                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-527454             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-jqbmw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-527454 event: Registered Node default-k8s-diff-port-527454 in Controller
	
	
	==> dmesg <==
	[  +0.054689] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044318] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.088542] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.699124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 8 12:48] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.061519] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071382] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.188772] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.158188] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.332966] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.884855] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.068096] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.366243] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.679324] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.358293] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 8 12:52] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.331574] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[Apr 8 12:53] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.875931] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +12.897299] systemd-fstab-generator[4088]: Ignoring "noauto" option for root device
	[  +0.141509] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 8 12:54] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887] <==
	{"level":"info","ts":"2024-04-08T12:53:00.911738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c switched to configuration voters=(9613909553285501196)"}
	{"level":"info","ts":"2024-04-08T12:53:00.912711Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","added-peer-id":"856b77cd5251110c","added-peer-peer-urls":["https://192.168.50.7:2380"]}
	{"level":"info","ts":"2024-04-08T12:53:00.913888Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-08T12:53:00.914212Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"856b77cd5251110c","initial-advertise-peer-urls":["https://192.168.50.7:2380"],"listen-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.7:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-08T12:53:00.914269Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-08T12:53:00.914356Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-04-08T12:53:00.914364Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-04-08T12:53:01.377983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T12:53:01.37804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T12:53:01.378073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 1"}
	{"level":"info","ts":"2024-04-08T12:53:01.37809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.378099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.378107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.378115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.382206Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:default-k8s-diff-port-527454 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:53:01.382377Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.382527Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:53:01.388312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:53:01.399987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:53:01.400044Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:53:01.400105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.400194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.400236Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.400596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-04-08T12:53:01.406725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:02:26 up 14 min,  0 users,  load average: 0.02, 0.10, 0.11
	Linux default-k8s-diff-port-527454 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507] <==
	I0408 12:56:22.348536       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:58:03.447497       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:03.447625       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0408 12:58:04.448722       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:04.448786       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 12:58:04.448800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:58:04.448979       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:58:04.449094       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 12:58:04.450173       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:59:04.449796       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:59:04.449892       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 12:59:04.449952       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 12:59:04.451194       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 12:59:04.451267       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 12:59:04.451274       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:01:04.451059       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:01:04.451151       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:01:04.451160       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:01:04.452585       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:01:04.452747       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:01:04.452777       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc] <==
	I0408 12:56:49.126382       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:57:18.669664       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:57:19.135668       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:57:48.676101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:57:49.145471       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:58:18.682713       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:58:19.153742       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:58:48.688388       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:58:49.163222       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 12:59:18.694017       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:59:19.172209       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 12:59:27.086706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="402.611µs"
	I0408 12:59:41.087613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="173.894µs"
	E0408 12:59:48.700721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 12:59:49.181552       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:00:18.707560       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:00:19.190732       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:00:48.713137       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:00:49.202685       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:01:18.718706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:01:19.211243       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:01:48.725403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:01:49.220741       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:02:18.731738       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:02:19.229112       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50] <==
	I0408 12:53:20.269360       1 server_others.go:72] "Using iptables proxy"
	I0408 12:53:20.288748       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.7"]
	I0408 12:53:20.378273       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:53:20.378357       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:53:20.378375       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:53:20.383185       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:53:20.383467       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:53:20.383498       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:53:20.384811       1 config.go:188] "Starting service config controller"
	I0408 12:53:20.384874       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:53:20.385032       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:53:20.385039       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:53:20.385838       1 config.go:315] "Starting node config controller"
	I0408 12:53:20.385846       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:53:20.490369       1 shared_informer.go:318] Caches are synced for node config
	I0408 12:53:20.490418       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:53:20.490447       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839] <==
	W0408 12:53:04.297261       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.297371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.316105       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 12:53:04.316199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 12:53:04.431485       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.431601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.662189       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 12:53:04.663749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 12:53:04.666406       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 12:53:04.666489       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:53:04.683290       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 12:53:04.683347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 12:53:04.685178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 12:53:04.685222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 12:53:04.685382       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.686131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.715670       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.715722       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.738353       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 12:53:04.738400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 12:53:04.752529       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 12:53:04.752585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 12:53:04.796339       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 12:53:04.796397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0408 12:53:07.749218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 13:00:07 default-k8s-diff-port-527454 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:00:07 default-k8s-diff-port-527454 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:00:07 default-k8s-diff-port-527454 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:00:07 default-k8s-diff-port-527454 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:00:11 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:00:11.064580    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:00:25 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:00:25.062578    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:00:38 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:00:38.063307    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:00:51 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:00:51.064131    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:01:05 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:01:05.062605    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:01:07 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:01:07.131832    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:01:07 default-k8s-diff-port-527454 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:01:07 default-k8s-diff-port-527454 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:01:07 default-k8s-diff-port-527454 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:01:07 default-k8s-diff-port-527454 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:01:16 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:01:16.062685    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:01:29 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:01:29.063710    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:01:42 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:01:42.062758    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:01:54 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:01:54.063582    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:02:06 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:02:06.063339    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:02:07 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:02:07.129574    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:02:07 default-k8s-diff-port-527454 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:02:07 default-k8s-diff-port-527454 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:02:07 default-k8s-diff-port-527454 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:02:07 default-k8s-diff-port-527454 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:02:20 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:02:20.063517    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	
	
	==> storage-provisioner [ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a] <==
	I0408 12:53:22.050163       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:53:22.060236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:53:22.060350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:53:22.084140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:53:22.084354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-527454_877718d7-2eb4-4d19-a7da-a516b03067da!
	I0408 12:53:22.085661       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7757a94-2e1b-45e7-907e-77fa413779b0", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-527454_877718d7-2eb4-4d19-a7da-a516b03067da became leader
	I0408 12:53:22.184872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-527454_877718d7-2eb4-4d19-a7da-a516b03067da!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-jqbmw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 describe pod metrics-server-57f55c9bc5-jqbmw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-527454 describe pod metrics-server-57f55c9bc5-jqbmw: exit status 1 (66.430816ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-jqbmw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-527454 describe pod metrics-server-57f55c9bc5-jqbmw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:56:11.829074  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:56:24.010291  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:56:47.590607  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:56:52.701506  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:57:34.873758  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:57:41.604389  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:57:47.054939  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:58:06.832896  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:58:22.065922  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:58:31.891711  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 12:58:44.543289  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused (identical warning logged 20 times)
E0408 12:59:04.651128  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused (identical warning logged 10 times)
E0408 12:59:13.957644  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused (identical warning logged 75 times)
E0408 13:00:29.655040  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused (identical warning logged 41 times)
E0408 13:01:09.880920  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
E0408 13:01:11.828600  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused (identical warning logged 13 times)
E0408 13:01:24.010088  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[previous warning line repeated 76 more times]
E0408 13:02:41.603968  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[previous warning line repeated 24 more times]
E0408 13:03:06.832855  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[previous warning line repeated 15 more times]
E0408 13:03:22.066102  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[previous warning line repeated 9 more times]
E0408 13:03:31.892557  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[previous warning line repeated 11 more times]
E0408 13:03:44.543039  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: [previous message repeated 29 more times]
E0408 13:04:13.958108  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: [previous message repeated 36 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (261.794996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-384148" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
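A quick manual re-check, sketched here under the assumption that the minikube profile name old-k8s-version-384148 also serves as the kubeconfig context (minikube's default), would be to repeat the same pod lookup the helper polls for once the apiserver responds again:

	out/minikube-linux-amd64 status -p old-k8s-version-384148
	kubectl --context old-k8s-version-384148 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

These mirror the status query and the k8s-app=kubernetes-dashboard selector already used above; they are illustrative commands for local reproduction, not part of the test harness.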
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (248.970751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-384148 logs -n 25: (1.660327021s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo cat                                               |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo containerd config dump                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:42:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:42:32.207974  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:38.288048  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:41.359947  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:47.439972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:50.512009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:56.591982  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:59.664002  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:05.744032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:08.816017  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:14.895990  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:17.967942  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:24.048010  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:27.119964  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:33.200067  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:36.272037  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:42.351972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:45.424082  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:51.503992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:54.576088  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:00.656001  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:03.728079  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:09.807949  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:12.880051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:18.960024  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:22.032036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:28.112053  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:31.183992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:37.264032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:40.336026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:46.416019  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:49.487998  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:55.568026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:58.640044  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:04.719978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:07.792028  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:13.871997  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:16.944057  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:23.023969  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:26.096051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:32.176049  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:35.247929  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:41.328036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:44.399954  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:50.480046  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:53.552034  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:59.632009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:02.704063  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:08.784031  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:11.856098  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:17.936013  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:21.007970  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:27.087978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:30.159984  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:36.240042  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:39.245220  433557 start.go:364] duration metric: took 4m33.298555643s to acquireMachinesLock for "no-preload-135234"
	I0408 12:46:39.245298  433557 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:39.245311  433557 fix.go:54] fixHost starting: 
	I0408 12:46:39.245782  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:39.245821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:39.261035  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0408 12:46:39.261632  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:39.262208  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:46:39.262234  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:39.262592  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:39.262819  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:39.262938  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:46:39.264995  433557 fix.go:112] recreateIfNeeded on no-preload-135234: state=Stopped err=<nil>
	I0408 12:46:39.265029  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	W0408 12:46:39.265203  433557 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:39.266971  433557 out.go:177] * Restarting existing kvm2 VM for "no-preload-135234" ...
	I0408 12:46:39.268140  433557 main.go:141] libmachine: (no-preload-135234) Calling .Start
	I0408 12:46:39.268315  433557 main.go:141] libmachine: (no-preload-135234) Ensuring networks are active...
	I0408 12:46:39.269323  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network default is active
	I0408 12:46:39.269669  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network mk-no-preload-135234 is active
	I0408 12:46:39.270047  433557 main.go:141] libmachine: (no-preload-135234) Getting domain xml...
	I0408 12:46:39.270763  433557 main.go:141] libmachine: (no-preload-135234) Creating domain...
	I0408 12:46:40.496145  433557 main.go:141] libmachine: (no-preload-135234) Waiting to get IP...
	I0408 12:46:40.497357  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.497870  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.497950  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.497853  434768 retry.go:31] will retry after 305.764185ms: waiting for machine to come up
	I0408 12:46:40.805894  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.806351  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.806380  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.806304  434768 retry.go:31] will retry after 359.02584ms: waiting for machine to come up
	I0408 12:46:39.242442  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:39.242498  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.242871  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:46:39.242935  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.243206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:46:39.245063  433439 machine.go:97] duration metric: took 4m37.367683512s to provisionDockerMachine
	I0408 12:46:39.245112  433439 fix.go:56] duration metric: took 4m37.391017413s for fixHost
	I0408 12:46:39.245118  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 4m37.391040241s
	W0408 12:46:39.245140  433439 start.go:713] error starting host: provision: host is not running
	W0408 12:46:39.245388  433439 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0408 12:46:39.245401  433439 start.go:728] Will try again in 5 seconds ...
	I0408 12:46:41.167272  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.167748  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.167779  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.167702  434768 retry.go:31] will retry after 412.762727ms: waiting for machine to come up
	I0408 12:46:41.582454  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.582959  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.582990  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.582904  434768 retry.go:31] will retry after 572.486121ms: waiting for machine to come up
	I0408 12:46:42.156830  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.157270  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.157294  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.157243  434768 retry.go:31] will retry after 706.130574ms: waiting for machine to come up
	I0408 12:46:42.865325  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.865829  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.865863  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.865762  434768 retry.go:31] will retry after 901.114252ms: waiting for machine to come up
	I0408 12:46:43.768578  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:43.769067  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:43.769103  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:43.769032  434768 retry.go:31] will retry after 1.160836088s: waiting for machine to come up
	I0408 12:46:44.931002  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:44.931408  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:44.931438  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:44.931349  434768 retry.go:31] will retry after 998.940623ms: waiting for machine to come up
	I0408 12:46:44.247774  433439 start.go:360] acquireMachinesLock for default-k8s-diff-port-527454: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:46:45.931728  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:45.932157  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:45.932241  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:45.932115  434768 retry.go:31] will retry after 1.43975568s: waiting for machine to come up
	I0408 12:46:47.373294  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:47.373786  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:47.373821  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:47.373733  434768 retry.go:31] will retry after 1.828434336s: waiting for machine to come up
	I0408 12:46:49.205019  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:49.205414  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:49.205462  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:49.205376  434768 retry.go:31] will retry after 2.847051956s: waiting for machine to come up
	I0408 12:46:52.055004  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:52.055561  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:52.055586  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:52.055517  434768 retry.go:31] will retry after 2.941262871s: waiting for machine to come up
	I0408 12:46:54.998158  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:54.998598  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:54.998631  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:54.998542  434768 retry.go:31] will retry after 3.082026915s: waiting for machine to come up
	I0408 12:46:59.561049  433674 start.go:364] duration metric: took 4m43.922045129s to acquireMachinesLock for "embed-certs-488947"
	I0408 12:46:59.561130  433674 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:59.561140  433674 fix.go:54] fixHost starting: 
	I0408 12:46:59.561636  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:59.561683  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:59.578117  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0408 12:46:59.578573  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:59.579047  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:46:59.579074  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:59.579432  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:59.579633  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:46:59.579852  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:46:59.581445  433674 fix.go:112] recreateIfNeeded on embed-certs-488947: state=Stopped err=<nil>
	I0408 12:46:59.581492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	W0408 12:46:59.581667  433674 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:59.584306  433674 out.go:177] * Restarting existing kvm2 VM for "embed-certs-488947" ...
	I0408 12:46:59.585750  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Start
	I0408 12:46:59.585971  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring networks are active...
	I0408 12:46:59.586749  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network default is active
	I0408 12:46:59.587136  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network mk-embed-certs-488947 is active
	I0408 12:46:59.587551  433674 main.go:141] libmachine: (embed-certs-488947) Getting domain xml...
	I0408 12:46:59.588302  433674 main.go:141] libmachine: (embed-certs-488947) Creating domain...
	I0408 12:46:58.084025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084608  433557 main.go:141] libmachine: (no-preload-135234) Found IP for machine: 192.168.61.48
	I0408 12:46:58.084660  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has current primary IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084668  433557 main.go:141] libmachine: (no-preload-135234) Reserving static IP address...
	I0408 12:46:58.085160  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.085198  433557 main.go:141] libmachine: (no-preload-135234) Reserved static IP address: 192.168.61.48
	I0408 12:46:58.085213  433557 main.go:141] libmachine: (no-preload-135234) DBG | skip adding static IP to network mk-no-preload-135234 - found existing host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"}
	I0408 12:46:58.085229  433557 main.go:141] libmachine: (no-preload-135234) DBG | Getting to WaitForSSH function...
	I0408 12:46:58.085240  433557 main.go:141] libmachine: (no-preload-135234) Waiting for SSH to be available...
	I0408 12:46:58.087595  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.087990  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.088025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.088155  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH client type: external
	I0408 12:46:58.088178  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa (-rw-------)
	I0408 12:46:58.088210  433557 main.go:141] libmachine: (no-preload-135234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:46:58.088228  433557 main.go:141] libmachine: (no-preload-135234) DBG | About to run SSH command:
	I0408 12:46:58.088241  433557 main.go:141] libmachine: (no-preload-135234) DBG | exit 0
	I0408 12:46:58.220043  433557 main.go:141] libmachine: (no-preload-135234) DBG | SSH cmd err, output: <nil>: 
	I0408 12:46:58.220440  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetConfigRaw
	I0408 12:46:58.221216  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.223881  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224184  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.224202  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224597  433557 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:46:58.224804  433557 machine.go:94] provisionDockerMachine start ...
	I0408 12:46:58.224828  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:58.225070  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.227668  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228048  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.228080  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228242  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.228438  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228647  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228780  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.228941  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.229238  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.229253  433557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:46:58.344562  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:46:58.344602  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.344888  433557 buildroot.go:166] provisioning hostname "no-preload-135234"
	I0408 12:46:58.344922  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.345147  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.347895  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348278  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.348311  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348433  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.348638  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348801  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348911  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.349077  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.349289  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.349303  433557 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-135234 && echo "no-preload-135234" | sudo tee /etc/hostname
	I0408 12:46:58.478959  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-135234
	
	I0408 12:46:58.478996  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.481692  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482164  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.482187  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482410  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.482643  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.482851  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.483032  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.483230  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.483446  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.483465  433557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-135234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-135234/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-135234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:46:58.606022  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:58.606059  433557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:46:58.606080  433557 buildroot.go:174] setting up certificates
	I0408 12:46:58.606092  433557 provision.go:84] configureAuth start
	I0408 12:46:58.606108  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.606465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.609605  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610046  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.610079  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.612452  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612756  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.612784  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612905  433557 provision.go:143] copyHostCerts
	I0408 12:46:58.612974  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:46:58.613029  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:46:58.613097  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:46:58.613200  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:46:58.613209  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:46:58.613232  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:46:58.613295  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:46:58.613302  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:46:58.613323  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:46:58.613438  433557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.no-preload-135234 san=[127.0.0.1 192.168.61.48 localhost minikube no-preload-135234]
	I0408 12:46:58.832264  433557 provision.go:177] copyRemoteCerts
	I0408 12:46:58.832335  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:46:58.832382  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.835259  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835609  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.835650  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835883  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.836158  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.836332  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.836468  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:58.922968  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:46:58.949601  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 12:46:58.976832  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:46:59.004643  433557 provision.go:87] duration metric: took 398.533019ms to configureAuth
	I0408 12:46:59.004683  433557 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:46:59.004885  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:46:59.004988  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.008264  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008735  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.008783  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008987  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.009238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009416  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009542  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.009680  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.009866  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.009884  433557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:46:59.299880  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:46:59.299912  433557 machine.go:97] duration metric: took 1.075094362s to provisionDockerMachine
	I0408 12:46:59.299925  433557 start.go:293] postStartSetup for "no-preload-135234" (driver="kvm2")
	I0408 12:46:59.299940  433557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:46:59.299981  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.300373  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:46:59.300406  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.303274  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303769  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.303806  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303941  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.304222  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.304575  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.304874  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.395808  433557 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:46:59.400795  433557 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:46:59.400831  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:46:59.400914  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:46:59.401021  433557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:46:59.401162  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:46:59.411883  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:46:59.438486  433557 start.go:296] duration metric: took 138.54299ms for postStartSetup
	I0408 12:46:59.438546  433557 fix.go:56] duration metric: took 20.19323532s for fixHost
	I0408 12:46:59.438577  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.441875  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442334  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.442366  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442528  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.442753  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.442969  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.443101  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.443232  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.443414  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.443424  433557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:46:59.560853  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580419.531854515
	
	I0408 12:46:59.560881  433557 fix.go:216] guest clock: 1712580419.531854515
	I0408 12:46:59.560891  433557 fix.go:229] Guest: 2024-04-08 12:46:59.531854515 +0000 UTC Remote: 2024-04-08 12:46:59.438552641 +0000 UTC m=+293.653384531 (delta=93.301874ms)
	I0408 12:46:59.560918  433557 fix.go:200] guest clock delta is within tolerance: 93.301874ms
	I0408 12:46:59.560929  433557 start.go:83] releasing machines lock for "no-preload-135234", held for 20.315655744s
	I0408 12:46:59.560965  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.561244  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:59.564248  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564623  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.564658  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564758  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565245  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565434  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565524  433557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:46:59.565571  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.565726  433557 ssh_runner.go:195] Run: cat /version.json
	I0408 12:46:59.565752  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.568339  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568729  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568766  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.568789  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568931  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569139  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569201  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.569227  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.569300  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569392  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569486  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569647  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.569782  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569900  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.689264  433557 ssh_runner.go:195] Run: systemctl --version
	I0408 12:46:59.695704  433557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:46:59.848323  433557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:46:59.856068  433557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:46:59.856171  433557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:46:59.877460  433557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:46:59.877490  433557 start.go:494] detecting cgroup driver to use...
	I0408 12:46:59.877557  433557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:46:59.895329  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:46:59.910849  433557 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:46:59.910908  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:46:59.925541  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:46:59.941511  433557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:00.064454  433557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:00.218535  433557 docker.go:233] disabling docker service ...
	I0408 12:47:00.218614  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:00.234510  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:00.249703  433557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:00.403556  433557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:00.569324  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:00.585058  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:00.607536  433557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:00.607592  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.624701  433557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:00.624774  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.637414  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.649846  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.662725  433557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:00.675738  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.688667  433557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.710326  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.722619  433557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:00.734130  433557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:00.734227  433557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:00.749998  433557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:00.761556  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:00.881544  433557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:01.036952  433557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:01.037040  433557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:01.042260  433557 start.go:562] Will wait 60s for crictl version
	I0408 12:47:01.042329  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.046327  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:01.092359  433557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:01.092465  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.127373  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.165027  433557 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0408 12:47:00.888196  433674 main.go:141] libmachine: (embed-certs-488947) Waiting to get IP...
	I0408 12:47:00.889196  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:00.889766  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:00.889808  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:00.889702  434916 retry.go:31] will retry after 239.282192ms: waiting for machine to come up
	I0408 12:47:01.130508  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.131075  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.131111  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.131016  434916 retry.go:31] will retry after 388.837258ms: waiting for machine to come up
	I0408 12:47:01.522006  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.522413  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.522444  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.522364  434916 retry.go:31] will retry after 372.310428ms: waiting for machine to come up
	I0408 12:47:01.896325  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.896919  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.896954  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.896851  434916 retry.go:31] will retry after 574.930775ms: waiting for machine to come up
	I0408 12:47:02.474045  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.474626  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.474664  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.474557  434916 retry.go:31] will retry after 506.414729ms: waiting for machine to come up
	I0408 12:47:02.982589  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.983203  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.983238  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.983135  434916 retry.go:31] will retry after 614.351996ms: waiting for machine to come up
	I0408 12:47:03.599165  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:03.599682  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:03.599724  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:03.599640  434916 retry.go:31] will retry after 1.130025801s: waiting for machine to come up
	I0408 12:47:04.731350  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:04.731841  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:04.731874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:04.731791  434916 retry.go:31] will retry after 1.346613974s: waiting for machine to come up
	I0408 12:47:01.166849  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:47:01.169772  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170183  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:01.170211  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170523  433557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:01.175336  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:01.193759  433557 kubeadm.go:877] updating cluster {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:01.193949  433557 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 12:47:01.194017  433557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:01.234439  433557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0408 12:47:01.234466  433557 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:01.234547  433557 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.234575  433557 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.234589  433557 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.234625  433557 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 12:47:01.234576  433557 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.234562  433557 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.234696  433557 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.234554  433557 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.236654  433557 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.236678  433557 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.236701  433557 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 12:47:01.236686  433557 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.236630  433557 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236789  433557 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.475737  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.476344  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.482596  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.486680  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.490012  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.496685  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0408 12:47:01.510269  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.597119  433557 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0408 12:47:01.597179  433557 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.597238  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696018  433557 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0408 12:47:01.696123  433557 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.696148  433557 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0408 12:47:01.696196  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696201  433557 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.696237  433557 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0408 12:47:01.696254  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696265  433557 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.696299  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.710260  433557 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0408 12:47:01.710317  433557 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.710369  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799524  433557 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0408 12:47:01.799583  433557 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.799592  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.799616  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.799626  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.799618  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799679  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.799734  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.916654  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 12:47:01.916701  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.916783  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:01.916809  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.923863  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.923904  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.923974  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.924021  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.924065  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924176  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924067  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.926651  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0408 12:47:01.926681  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926722  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926783  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0408 12:47:01.974801  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0408 12:47:01.974875  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974939  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:01.974969  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974944  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0408 12:47:02.062944  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.916991  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.990237597s)
	I0408 12:47:04.917016  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.942055075s)
	I0408 12:47:04.917036  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0408 12:47:04.917040  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0408 12:47:04.917047  433557 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917098  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917117  433557 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.854126587s)
	I0408 12:47:04.917187  433557 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0408 12:47:04.917233  433557 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.917278  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:06.080429  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:06.080910  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:06.080942  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:06.080866  434916 retry.go:31] will retry after 1.125692215s: waiting for machine to come up
	I0408 12:47:07.208553  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:07.209015  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:07.209040  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:07.208961  434916 retry.go:31] will retry after 1.958080491s: waiting for machine to come up
	I0408 12:47:09.169878  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:09.170289  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:09.170319  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:09.170243  434916 retry.go:31] will retry after 2.241966019s: waiting for machine to come up
	I0408 12:47:08.833969  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.916836964s)
	I0408 12:47:08.834011  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0408 12:47:08.834029  433557 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834032  433557 ssh_runner.go:235] Completed: which crictl: (3.916731005s)
	I0408 12:47:08.834085  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834101  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:11.414435  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:11.414829  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:11.414851  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:11.414786  434916 retry.go:31] will retry after 2.815941766s: waiting for machine to come up
	I0408 12:47:14.233868  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:14.234272  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:14.234318  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:14.234228  434916 retry.go:31] will retry after 3.213192238s: waiting for machine to come up
	I0408 12:47:10.925471  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091353526s)
	I0408 12:47:10.925519  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0408 12:47:10.925542  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925581  433557 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.091434251s)
	I0408 12:47:10.925612  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925673  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 12:47:10.925782  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:12.405175  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.479529413s)
	I0408 12:47:12.405221  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0408 12:47:12.405238  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:12.405236  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.479424271s)
	I0408 12:47:12.405270  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0408 12:47:12.405296  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:14.283021  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (1.877693108s)
	I0408 12:47:14.283061  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0408 12:47:14.283079  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:14.283143  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:17.451190  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451657  433674 main.go:141] libmachine: (embed-certs-488947) Found IP for machine: 192.168.72.159
	I0408 12:47:17.451705  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has current primary IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451725  433674 main.go:141] libmachine: (embed-certs-488947) Reserving static IP address...
	I0408 12:47:17.452192  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.452239  433674 main.go:141] libmachine: (embed-certs-488947) Reserved static IP address: 192.168.72.159
	I0408 12:47:17.452259  433674 main.go:141] libmachine: (embed-certs-488947) DBG | skip adding static IP to network mk-embed-certs-488947 - found existing host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"}
	I0408 12:47:17.452282  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Getting to WaitForSSH function...
	I0408 12:47:17.452297  433674 main.go:141] libmachine: (embed-certs-488947) Waiting for SSH to be available...
	I0408 12:47:17.454780  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455169  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.455208  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH client type: external
	I0408 12:47:17.455354  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa (-rw-------)
	I0408 12:47:17.455384  433674 main.go:141] libmachine: (embed-certs-488947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:17.455401  433674 main.go:141] libmachine: (embed-certs-488947) DBG | About to run SSH command:
	I0408 12:47:17.455414  433674 main.go:141] libmachine: (embed-certs-488947) DBG | exit 0
	I0408 12:47:17.585037  433674 main.go:141] libmachine: (embed-certs-488947) DBG | SSH cmd err, output: <nil>: 
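The DBG lines above spell out the exact external SSH invocation used for WaitForSSH; reassembled into a single runnable command (flags, key path, and address copied from the log) it is:

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa \
        -p 22 docker@192.168.72.159 'exit 0'    # the probe succeeds once sshd answers, as seen just below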
	I0408 12:47:17.585443  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetConfigRaw
	I0408 12:47:17.586184  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.589492  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.589953  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.589985  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.590269  433674 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:47:17.590518  433674 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:17.590550  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:17.590798  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.593968  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594570  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.594615  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594832  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.595073  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595236  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595442  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.595661  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.595892  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.595905  433674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:17.708468  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:17.708504  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.708857  433674 buildroot.go:166] provisioning hostname "embed-certs-488947"
	I0408 12:47:17.708890  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.709083  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.712242  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712698  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.712732  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712928  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.713122  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713298  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713433  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.713612  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.713801  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.713817  433674 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
	I0408 12:47:17.842964  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488947
	
	I0408 12:47:17.843017  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.846436  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.846959  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.846992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.847225  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.847486  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847726  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847945  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.848182  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.848373  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.848397  433674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488947/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:17.975087  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:17.975123  433674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:17.975178  433674 buildroot.go:174] setting up certificates
	I0408 12:47:17.975198  433674 provision.go:84] configureAuth start
	I0408 12:47:17.975212  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.975606  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.979028  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979483  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.979510  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979754  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.982474  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.982944  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.982977  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.983174  433674 provision.go:143] copyHostCerts
	I0408 12:47:17.983230  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:17.983240  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:17.983291  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:17.983408  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:17.983419  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:17.983444  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:17.983500  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:17.983507  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:17.983526  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:17.983580  433674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488947 san=[127.0.0.1 192.168.72.159 embed-certs-488947 localhost minikube]
	I0408 12:47:18.043022  433674 provision.go:177] copyRemoteCerts
	I0408 12:47:18.043092  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:18.043162  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.046335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046722  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.046757  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046904  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.047145  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.047333  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.047475  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.134761  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:18.163745  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 12:47:18.192946  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:18.220790  433674 provision.go:87] duration metric: took 245.573885ms to configureAuth
	I0408 12:47:18.220827  433674 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:18.221067  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:47:18.221175  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.224177  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.224805  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.224839  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.225098  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.225363  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225569  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225797  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.226024  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.226202  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.226219  433674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:18.522682  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:18.522718  433674 machine.go:97] duration metric: took 932.18024ms to provisionDockerMachine
	I0408 12:47:18.522735  433674 start.go:293] postStartSetup for "embed-certs-488947" (driver="kvm2")
	I0408 12:47:18.522750  433674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:18.522776  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.523133  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:18.523174  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.526523  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.526872  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.526903  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.527101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.527336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.527512  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.527692  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.615353  433674 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:18.620414  433674 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:18.620447  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:18.620525  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:18.620627  433674 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:18.620726  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:18.630585  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:18.658952  433674 start.go:296] duration metric: took 136.200863ms for postStartSetup
	I0408 12:47:18.659004  433674 fix.go:56] duration metric: took 19.097863992s for fixHost
	I0408 12:47:18.659037  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.662115  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662571  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.662606  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662843  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.663100  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663308  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663480  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.663676  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.663919  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.663939  433674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:18.781355  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580438.730334929
	
	I0408 12:47:18.781402  433674 fix.go:216] guest clock: 1712580438.730334929
	I0408 12:47:18.781427  433674 fix.go:229] Guest: 2024-04-08 12:47:18.730334929 +0000 UTC Remote: 2024-04-08 12:47:18.659010209 +0000 UTC m=+303.178294166 (delta=71.32472ms)
	I0408 12:47:18.781457  433674 fix.go:200] guest clock delta is within tolerance: 71.32472ms
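As a quick check of the arithmetic above: 18.730334929 s (guest) − 18.659010209 s (remote) = 0.071324720 s, i.e. the 71.32472ms delta reported, which is why the guest clock is accepted as within tolerance and provisioning continues.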
	I0408 12:47:18.781465  433674 start.go:83] releasing machines lock for "embed-certs-488947", held for 19.22036189s
	I0408 12:47:18.781502  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.781800  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:18.784825  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785270  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.785313  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786104  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786456  433674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:18.786501  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.786626  433674 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:18.786660  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.789409  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789704  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790019  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790149  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790306  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790322  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790338  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790495  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790528  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790745  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.790867  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790997  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.911025  433674 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:18.917785  433674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:19.070383  433674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:19.077521  433674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:19.077606  433674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:19.094598  433674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:19.094636  433674 start.go:494] detecting cgroup driver to use...
	I0408 12:47:19.094750  433674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:19.111163  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:19.125621  433674 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:19.125688  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:19.141948  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:19.156671  433674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:19.281688  433674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:19.455445  433674 docker.go:233] disabling docker service ...
	I0408 12:47:19.455519  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:19.474594  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:19.491301  433674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:19.646063  433674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:19.786075  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
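Condensing the systemctl calls above: cri-dockerd and docker are stopped, disabled, and masked so that cri-o is the only CRI endpoint left on the guest (illustrative re-listing of the commands already logged, combined into fewer invocations):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service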
	I0408 12:47:19.803535  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:19.829204  433674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:19.829282  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.842132  433674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:19.842201  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.853915  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.866449  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.879235  433674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:19.899411  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.920363  433674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.946414  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
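Taken together, the sed edits above set the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. A grep like this (illustrative) shows the rendered values:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the edits logged above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",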
	I0408 12:47:19.958824  433674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:19.969691  433674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:19.969754  433674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:19.986458  433674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:19.998655  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:20.157494  433674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:20.318209  433674 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:20.318287  433674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:20.325414  433674 start.go:562] Will wait 60s for crictl version
	I0408 12:47:20.325490  433674 ssh_runner.go:195] Run: which crictl
	I0408 12:47:20.330070  433674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:20.383808  433674 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:20.383959  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.417705  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.454321  433674 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:47:20.456101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:20.460035  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.460734  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:20.460774  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.461140  433674 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:20.467650  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:20.486936  433674 kubeadm.go:877] updating cluster {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:20.487105  433674 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:47:20.487176  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:20.529152  433674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:47:20.529293  433674 ssh_runner.go:195] Run: which lz4
	I0408 12:47:16.552712  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.26954566s)
	I0408 12:47:16.552781  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0408 12:47:16.552797  433557 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:16.552839  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:17.512103  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 12:47:17.512151  433557 cache_images.go:123] Successfully loaded all cached images
	I0408 12:47:17.512158  433557 cache_images.go:92] duration metric: took 16.277680364s to LoadCachedImages
	I0408 12:47:17.512171  433557 kubeadm.go:928] updating node { 192.168.61.48 8443 v1.30.0-rc.0 crio true true} ...
	I0408 12:47:17.512324  433557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-135234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:17.512440  433557 ssh_runner.go:195] Run: crio config
	I0408 12:47:17.561382  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:17.561424  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:17.561441  433557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:17.561472  433557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-135234 NodeName:no-preload-135234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:17.561681  433557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-135234"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:17.561807  433557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0408 12:47:17.574237  433557 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:17.574321  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:17.587129  433557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0408 12:47:17.609022  433557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0408 12:47:17.629656  433557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0408 12:47:17.650373  433557 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:17.655031  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:17.670872  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:17.811548  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
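After the kubelet unit, its 10-kubeadm.conf drop-in and kubeadm.yaml.new have been copied onto the guest, the runner reloads systemd and starts kubelet, as the two systemctl lines above show. A minimal Go sketch of that reload-and-start step using plain os/exec (not minikube's ssh_runner, which wraps the same commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and surfaces its combined output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	steps := [][]string{
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "start", "kubelet"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			panic(err)
		}
	}
}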
	I0408 12:47:17.830945  433557 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234 for IP: 192.168.61.48
	I0408 12:47:17.830974  433557 certs.go:194] generating shared ca certs ...
	I0408 12:47:17.831000  433557 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:17.831219  433557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:17.831277  433557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:17.831290  433557 certs.go:256] generating profile certs ...
	I0408 12:47:17.831453  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/client.key
	I0408 12:47:17.831521  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key.dbd08c09
	I0408 12:47:17.831577  433557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key
	I0408 12:47:17.831823  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:17.831891  433557 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:17.831906  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:17.831946  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:17.831978  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:17.832007  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:17.832059  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:17.832899  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:17.869894  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:17.902893  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:17.943547  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:17.990462  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:47:18.026697  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:18.055643  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:18.083357  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:47:18.109247  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:18.134513  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:18.161811  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:18.189968  433557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:18.210173  433557 ssh_runner.go:195] Run: openssl version
	I0408 12:47:18.216813  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:18.230693  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236461  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236526  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.244183  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:18.257589  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:18.271235  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277004  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277088  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.283549  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:18.296789  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:18.309587  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314537  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314608  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.320942  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
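Each certificate installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (for example minikubeCA.pem maps to b5213941.0 a few lines up), which is how OpenSSL-based clients locate trusted CAs. A hedged Go sketch of that hash-and-symlink step, shelling out to openssl exactly as the logged commands do; only the paths and the example hash are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	// ask openssl for the subject hash, as in "openssl x509 -hash -noout -in ..."
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem

	// emulate "ln -fs <cert> /etc/ssl/certs/<hash>.0"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}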
	I0408 12:47:18.333407  433557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:18.338637  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:18.345365  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:18.352262  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:18.359464  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:18.366233  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:18.373280  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
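The "-checkend 86400" calls above ask openssl whether each control-plane certificate expires within the next 24 hours; only certs that pass are reused, otherwise they would be regenerated. The same check can be done in pure Go with crypto/x509, as in this sketch (the certificate path is one of those probed above; the helper itself is illustrative, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}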
	I0408 12:47:18.380134  433557 kubeadm.go:391] StartCluster: {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:18.380291  433557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:18.380403  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.423068  433557 cri.go:89] found id: ""
	I0408 12:47:18.423164  433557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:18.435458  433557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:18.435497  433557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:18.435503  433557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:18.435562  433557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:18.447509  433557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:18.448720  433557 kubeconfig.go:125] found "no-preload-135234" server: "https://192.168.61.48:8443"
	I0408 12:47:18.451154  433557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:18.463246  433557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.48
	I0408 12:47:18.463299  433557 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:18.463315  433557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:18.463394  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.522929  433557 cri.go:89] found id: ""
	I0408 12:47:18.523011  433557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:18.546346  433557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:18.558613  433557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:18.558640  433557 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:18.558714  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:18.570020  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:18.570106  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:18.581323  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:18.593718  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:18.593778  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:18.606889  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.619251  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:18.619320  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.632343  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:18.644913  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:18.645004  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
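Each grep above asks whether an existing kubeconfig already points at https://control-plane.minikube.internal:8443; because the files are missing after the stop, every grep exits 2 and the runner falls through to rm -f so kubeadm can regenerate them. A rough Go rendering of that check-then-remove loop (illustrative only, not the kubeadm.go implementation):

package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		// missing or not pointing at the expected control plane: drop it
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(path)
		}
	}
}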
	I0408 12:47:18.656965  433557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:18.670774  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:18.785507  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:19.988135  433557 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202584017s)
	I0408 12:47:19.988174  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.235430  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.316709  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.456307  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:20.456393  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
	I0408 12:47:20.534154  433674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:47:20.538991  433674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:20.539034  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:47:22.249270  433674 crio.go:462] duration metric: took 1.715182486s to copy over tarball
	I0408 12:47:22.249391  433674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:24.966695  433674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.717265287s)
	I0408 12:47:24.966730  433674 crio.go:469] duration metric: took 2.717416948s to extract the tarball
	I0408 12:47:24.966740  433674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:25.007656  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:25.063445  433674 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:47:25.063482  433674 cache_images.go:84] Images are preloaded, skipping loading
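The decision between "couldn't find preloaded image ... assuming images are not preloaded" (before the tarball was extracted) and "all images are preloaded" here comes from listing the runtime's images and looking for the expected tags. A hedged sketch of that check, shelling out to crictl as the logged command does; the JSON field names ("images", "repoTags") are assumptions based on typical `crictl images --output json` output, and the target tag is the one named in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models just the fields we need from crictl's JSON output.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.29.3"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}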
	I0408 12:47:25.063494  433674 kubeadm.go:928] updating node { 192.168.72.159 8443 v1.29.3 crio true true} ...
	I0408 12:47:25.063627  433674 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-488947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:25.063745  433674 ssh_runner.go:195] Run: crio config
	I0408 12:47:25.122219  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:25.122282  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:25.122298  433674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:25.122330  433674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488947 NodeName:embed-certs-488947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:25.122556  433674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-488947"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:25.122633  433674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:47:25.137001  433674 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:25.137148  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:25.151168  433674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0408 12:47:25.171698  433674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:25.195101  433674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0408 12:47:25.216873  433674 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:25.221155  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:25.235740  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:25.354135  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:25.377763  433674 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947 for IP: 192.168.72.159
	I0408 12:47:25.377801  433674 certs.go:194] generating shared ca certs ...
	I0408 12:47:25.377824  433674 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:25.378055  433674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:25.378137  433674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:25.378161  433674 certs.go:256] generating profile certs ...
	I0408 12:47:25.378299  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/client.key
	I0408 12:47:25.378391  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key.21d2a89c
	I0408 12:47:25.378460  433674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key
	I0408 12:47:25.378628  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:25.378687  433674 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:25.378702  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:25.378736  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:25.378780  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:25.378818  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:25.378888  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:25.379800  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:25.422370  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:25.468967  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:25.516750  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:20.956916  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.456948  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.957498  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.982763  433557 api_server.go:72] duration metric: took 1.526450888s to wait for apiserver process to appear ...
	I0408 12:47:21.982797  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:21.982852  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.363696  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.363732  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.363758  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.398003  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.398065  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.483280  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
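Meanwhile the old-k8s-version VM is still waiting for libvirt's DHCP server to hand it an address; each retry.go line above is one failed lookup followed by a growing, jittered delay (230ms, 383ms, 430ms, ...). A generic sketch of that retry-with-backoff pattern; lookupIP is a hypothetical stand-in, not the libmachine API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder: in the real flow this would read libvirt's
// DHCP leases for the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// grow the delay and add jitter, like the intervals in the log
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	fmt.Println("machine never got an IP")
}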
	I0408 12:47:26.461241  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.461305  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.461321  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.482518  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.482566  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.483554  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.497035  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.497075  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.983270  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.996515  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.996556  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.483125  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.491506  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.491549  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.983839  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.991044  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.991090  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.483669  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.490665  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:28.490703  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.983248  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.998278  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:47:29.007388  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:47:29.007429  433557 api_server.go:131] duration metric: took 7.024624495s to wait for apiserver health ...
	I0408 12:47:29.007444  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:29.007452  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:29.009506  433557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:25.561601  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 12:47:26.087896  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:26.116559  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:26.145651  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:26.174910  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:26.206627  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:26.238398  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:26.281684  433674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:26.306417  433674 ssh_runner.go:195] Run: openssl version
	I0408 12:47:26.313279  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:26.328106  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333727  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333810  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.340200  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:26.352316  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:26.364788  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.369928  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.370003  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.376525  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:26.388232  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:26.400301  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405327  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405407  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.411586  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:26.423764  433674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:26.428995  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:26.435932  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:26.442742  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:26.451458  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:26.458715  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:26.466424  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:26.473948  433674 kubeadm.go:391] StartCluster: {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:26.474083  433674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:26.474158  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.515603  433674 cri.go:89] found id: ""
	I0408 12:47:26.515676  433674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:26.526818  433674 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:26.526845  433674 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:26.526851  433674 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:26.526908  433674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:26.537675  433674 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:26.538807  433674 kubeconfig.go:125] found "embed-certs-488947" server: "https://192.168.72.159:8443"
	I0408 12:47:26.540848  433674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:26.551278  433674 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.159
	I0408 12:47:26.551317  433674 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:26.551330  433674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:26.551406  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.591372  433674 cri.go:89] found id: ""
	I0408 12:47:26.591478  433674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:26.610486  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:26.621770  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:26.621794  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:26.621869  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:26.632480  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:26.632554  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:26.645878  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:26.659969  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:26.660068  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:26.670611  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.680945  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:26.681034  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.692201  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:26.703049  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:26.703126  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:26.715887  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:26.727464  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:26.956245  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.722655  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.973294  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.086774  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.203640  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:28.203755  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:28.704550  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.203852  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.704305  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.724333  433674 api_server.go:72] duration metric: took 1.520681062s to wait for apiserver process to appear ...
	I0408 12:47:29.724372  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:29.724402  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:29.010843  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:29.029631  433557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:29.052609  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:29.069954  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:29.070010  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:29.070022  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:29.070034  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:29.070043  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:29.070049  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:47:29.070076  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:29.070087  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:29.070098  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:47:29.070107  433557 system_pods.go:74] duration metric: took 17.469317ms to wait for pod list to return data ...
	I0408 12:47:29.070117  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:29.075401  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:29.075443  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:29.075459  433557 node_conditions.go:105] duration metric: took 5.335891ms to run NodePressure ...
	I0408 12:47:29.075489  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:29.403218  433557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409235  433557 kubeadm.go:733] kubelet initialised
	I0408 12:47:29.409263  433557 kubeadm.go:734] duration metric: took 6.014758ms waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409276  433557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:29.418787  433557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.441264  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441310  433557 pod_ready.go:81] duration metric: took 22.478832ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.441325  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441336  433557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.461805  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461916  433557 pod_ready.go:81] duration metric: took 20.564997ms for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.461945  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461982  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.475160  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475198  433557 pod_ready.go:81] duration metric: took 13.191566ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.475229  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475241  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.486266  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486306  433557 pod_ready.go:81] duration metric: took 11.046794ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.486321  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486331  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.857658  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857703  433557 pod_ready.go:81] duration metric: took 371.357848ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.857717  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857725  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.258154  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258194  433557 pod_ready.go:81] duration metric: took 400.459219ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.258208  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258230  433557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.656845  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656890  433557 pod_ready.go:81] duration metric: took 398.64565ms for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.656904  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656915  433557 pod_ready.go:38] duration metric: took 1.247627349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:30.656947  433557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:47:30.683024  433557 ops.go:34] apiserver oom_adj: -16
	I0408 12:47:30.683055  433557 kubeadm.go:591] duration metric: took 12.247545723s to restartPrimaryControlPlane
	I0408 12:47:30.683067  433557 kubeadm.go:393] duration metric: took 12.302946s to StartCluster
	I0408 12:47:30.683095  433557 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.683214  433557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:30.685507  433557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.685852  433557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:47:30.687967  433557 out.go:177] * Verifying Kubernetes components...
	I0408 12:47:30.685951  433557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:47:30.686122  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:47:30.689462  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:30.689475  433557 addons.go:69] Setting storage-provisioner=true in profile "no-preload-135234"
	I0408 12:47:30.689511  433557 addons.go:234] Setting addon storage-provisioner=true in "no-preload-135234"
	W0408 12:47:30.689521  433557 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:47:30.689555  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.689573  433557 addons.go:69] Setting default-storageclass=true in profile "no-preload-135234"
	I0408 12:47:30.689620  433557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-135234"
	I0408 12:47:30.689956  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.689995  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.689996  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690026  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.690085  433557 addons.go:69] Setting metrics-server=true in profile "no-preload-135234"
	I0408 12:47:30.690135  433557 addons.go:234] Setting addon metrics-server=true in "no-preload-135234"
	W0408 12:47:30.690146  433557 addons.go:243] addon metrics-server should already be in state true
	I0408 12:47:30.690186  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.690614  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690692  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.710746  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0408 12:47:30.710947  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0408 12:47:30.711153  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0408 12:47:30.711301  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711752  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711839  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.712010  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712027  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712564  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.712757  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712780  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712911  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712926  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.713381  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.713427  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.713660  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714094  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714304  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.714365  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.714401  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.717892  433557 addons.go:234] Setting addon default-storageclass=true in "no-preload-135234"
	W0408 12:47:30.717959  433557 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:47:30.718004  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.718497  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.718577  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.734825  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0408 12:47:30.736890  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0408 12:47:30.756599  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.756681  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.757290  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757312  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757318  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757332  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757774  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.757849  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.758015  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.758082  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.760658  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.760732  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.762999  433557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:47:30.764689  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:47:30.764714  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:47:30.766392  433557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:30.764741  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.767890  433557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:30.767911  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:47:30.767933  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.772580  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.772714  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773015  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773038  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773423  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773449  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773462  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773663  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773875  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.773897  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.774038  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774074  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774163  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.774227  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.779694  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0408 12:47:30.780190  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.780772  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.780793  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.781114  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.781773  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.781821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.803661  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0408 12:47:30.804212  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.804828  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.804847  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.805397  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.805713  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.807761  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.808244  433557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:30.808269  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:47:30.808288  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.811598  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812078  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.812109  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812264  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.812465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.812702  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.812868  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:30.979880  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:31.004961  433557 node_ready.go:35] waiting up to 6m0s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:31.088114  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:31.110971  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:47:31.111017  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:47:31.150193  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:47:31.150229  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:47:31.184811  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.184899  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:47:31.214364  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.244802  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:32.406228  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.318067686s)
	I0408 12:47:32.406305  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406317  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.406830  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.406897  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.406913  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406921  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.407242  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407275  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407319  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.407329  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.532524  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318098791s)
	I0408 12:47:32.532662  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532694  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.532576  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287674494s)
	I0408 12:47:32.532774  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532799  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533022  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533041  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533052  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533060  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533223  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533280  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533286  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533294  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533301  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533457  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533516  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533539  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533546  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.534974  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.534991  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.535019  433557 addons.go:470] Verifying addon metrics-server=true in "no-preload-135234"
	I0408 12:47:32.543151  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.543183  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.543549  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.543571  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.546033  433557 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0408 12:47:32.894282  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:32.894320  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:32.894336  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:32.988397  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:32.988442  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.224790  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.232146  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.232176  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.724683  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.729479  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.729520  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:34.224919  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:34.230233  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:47:34.247835  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:47:34.247872  433674 api_server.go:131] duration metric: took 4.523492127s to wait for apiserver health ...
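The healthz wait logged above (api_server.go repeatedly GETs https://192.168.72.159:8443/healthz and treats the 403 and 500 responses as "not ready yet" until it finally sees 200 "ok") can be reproduced with a small standalone Go program. The following is an illustrative sketch only, not minikube's api_server.go; the endpoint URL is taken from the log, while the timeout and poll interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous and the apiserver presents a cluster-internal
		// certificate, so verification is skipped for this sketch (the log's
		// "system:anonymous" 403 responses imply the same kind of probe).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.159:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}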
	I0408 12:47:34.247883  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:34.247890  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:34.249807  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:34.251603  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:34.265254  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:34.288078  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:34.301533  433674 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:34.301570  433674 system_pods.go:61] "coredns-76f75df574-hq2mm" [cfc7bd40-0b7d-4e00-ac55-b3ae796018ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:34.301577  433674 system_pods.go:61] "etcd-embed-certs-488947" [eb29ace5-8ad9-4080-a875-2eb83dcea583] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:34.301585  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [8e97033f-996a-4b64-9474-7b4d562eb1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:34.301591  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [b3db7631-d953-418e-9c72-f299d0287a2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:34.301595  433674 system_pods.go:61] "kube-proxy-2gn8m" [c31d8f0d-d6c1-4afa-b64c-7fc422d493f2] Running
	I0408 12:47:34.301600  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b9b29f85-7a75-4b09-b6cd-940ff42326d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:34.301604  433674 system_pods.go:61] "metrics-server-57f55c9bc5-z2ztl" [d9dc47ad-3370-4e55-a724-8c529c723992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:34.301607  433674 system_pods.go:61] "storage-provisioner" [4953dc3a-31ca-464d-9530-34f488ed9a02] Running
	I0408 12:47:34.301617  433674 system_pods.go:74] duration metric: took 13.514139ms to wait for pod list to return data ...
	I0408 12:47:34.301624  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:34.305931  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:34.305962  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:34.305974  433674 node_conditions.go:105] duration metric: took 4.345624ms to run NodePressure ...
	I0408 12:47:34.305993  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:34.598392  433674 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603606  433674 kubeadm.go:733] kubelet initialised
	I0408 12:47:34.603632  433674 kubeadm.go:734] duration metric: took 5.204237ms waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603641  433674 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:34.610027  433674 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:32.547718  433557 addons.go:505] duration metric: took 1.861769291s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0408 12:47:33.008857  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:35.510251  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:39.502172  433439 start.go:364] duration metric: took 55.254308447s to acquireMachinesLock for "default-k8s-diff-port-527454"
	I0408 12:47:39.502232  433439 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:39.502245  433439 fix.go:54] fixHost starting: 
	I0408 12:47:39.502725  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:39.502767  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:39.523738  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0408 12:47:39.525022  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:39.525614  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:47:39.525646  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:39.526077  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:39.526307  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:47:39.526448  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:47:39.528207  433439 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527454: state=Stopped err=<nil>
	I0408 12:47:39.528241  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	W0408 12:47:39.528449  433439 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:39.530360  433439 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-527454" ...
	I0408 12:47:36.618430  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.619713  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.009213  433557 node_ready.go:49] node "no-preload-135234" has status "Ready":"True"
	I0408 12:47:38.009241  433557 node_ready.go:38] duration metric: took 7.004239102s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:38.009250  433557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:38.014665  433557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020024  433557 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:38.020054  433557 pod_ready.go:81] duration metric: took 5.358174ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020067  433557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:40.030803  433557 pod_ready.go:102] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"False"
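The pod_ready.go waits interleaved above ("waiting up to 6m0s for pod ... to be Ready") boil down to polling each pod's Ready condition through the Kubernetes API. Below is a minimal client-go sketch of that check, under stated assumptions: the kubeconfig path, pod name, namespace, deadline, and poll interval are placeholders drawn from the log, and this is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-135234", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}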
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
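The ssh_runner/sshutil lines above open a key-based SSH session into the guest (user docker, the machine's id_rsa key, host key checking disabled) and run commands or copy certificates over it. The following is a minimal sketch of such a session using golang.org/x/crypto/ssh; the host, user, and key path come from the log, and this is illustrative rather than minikube's sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.245:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Run one of the commands seen later in the log and print its output.
	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}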
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
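For reference, the delta reported here is simply the guest clock minus the host clock at the moment of the check: 1712580459.470646239 s − 1712580459.380945950 s ≈ 0.089700289 s, i.e. the 89.700289 ms shown, which is why fix.go reports it as within tolerance.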
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
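Net effect of the three sed edits above on /etc/crio/crio.conf.d/02-crio.conf: pause_image is set to "registry.k8s.io/pause:3.2", cgroup_manager to "cgroupfs", any existing conmon_cgroup line is removed, and conmon_cgroup = "pod" is re-inserted directly after the cgroup_manager line.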
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:39.531815  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Start
	I0408 12:47:39.531988  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring networks are active...
	I0408 12:47:39.532969  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network default is active
	I0408 12:47:39.533486  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network mk-default-k8s-diff-port-527454 is active
	I0408 12:47:39.533947  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Getting domain xml...
	I0408 12:47:39.534767  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Creating domain...
	I0408 12:47:40.935150  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting to get IP...
	I0408 12:47:40.936250  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:40.936778  435248 retry.go:31] will retry after 215.442539ms: waiting for machine to come up
	I0408 12:47:41.154393  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154940  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.154852  435248 retry.go:31] will retry after 274.982374ms: waiting for machine to come up
	I0408 12:47:41.431442  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.431990  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.432023  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.431933  435248 retry.go:31] will retry after 335.077282ms: waiting for machine to come up
	I0408 12:47:40.620537  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:42.622241  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:44.118493  433674 pod_ready.go:92] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.118532  433674 pod_ready.go:81] duration metric: took 9.508474788s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.118545  433674 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626843  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.626869  433674 pod_ready.go:81] duration metric: took 508.318376ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626882  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633488  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.633521  433674 pod_ready.go:81] duration metric: took 6.630145ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633535  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027744  433557 pod_ready.go:92] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.027771  433557 pod_ready.go:81] duration metric: took 3.007695895s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027782  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034038  433557 pod_ready.go:92] pod "kube-apiserver-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.034076  433557 pod_ready.go:81] duration metric: took 6.28617ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034090  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039232  433557 pod_ready.go:92] pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.039262  433557 pod_ready.go:81] duration metric: took 5.161613ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039277  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045793  433557 pod_ready.go:92] pod "kube-proxy-tr6td" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.045887  433557 pod_ready.go:81] duration metric: took 6.600896ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045908  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.209976  433557 pod_ready.go:92] pod "kube-scheduler-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.210003  433557 pod_ready.go:81] duration metric: took 164.085848ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.210018  433557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:43.220338  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:45.718170  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:41.768734  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769194  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.769131  435248 retry.go:31] will retry after 581.590127ms: waiting for machine to come up
	I0408 12:47:42.352156  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.352975  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.353017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:42.352850  435248 retry.go:31] will retry after 673.545679ms: waiting for machine to come up
	I0408 12:47:43.028329  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029066  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029101  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.028956  435248 retry.go:31] will retry after 690.795418ms: waiting for machine to come up
	I0408 12:47:43.721435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.721999  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.722025  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.721948  435248 retry.go:31] will retry after 941.917321ms: waiting for machine to come up
	I0408 12:47:44.665002  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665468  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665495  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:44.665406  435248 retry.go:31] will retry after 1.037587737s: waiting for machine to come up
	I0408 12:47:45.705319  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705792  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705822  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:45.705730  435248 retry.go:31] will retry after 1.287151334s: waiting for machine to come up
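
Each "will retry after ...: waiting for machine to come up" line above reflects a retry helper that re-reads the libvirt DHCP leases with a growing, lightly jittered delay until the new VM reports an IP. The sketch below shows that pattern in isolation; the probe function is a stand-in rather than libmachine's actual lease lookup, and the timeout is an assumption.

    // Generic retry-with-growing-backoff loop, mirroring the logged retry intervals.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor retries probe with an increasing, jittered delay until it succeeds
    // or the deadline passes.
    func waitFor(probe func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		if err := probe(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay+jitter)
    		time.Sleep(delay + jitter)
    		delay += delay / 2 // grow the wait, as in the logged intervals
    	}
    	return errors.New("machine did not come up before the deadline")
    }

    func main() {
    	polls := 0
    	err := waitFor(func() error {
    		polls++
    		if polls < 5 { // pretend the DHCP lease shows up on the fifth poll
    			return errors.New("no IP yet")
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("result:", err)
    }
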
	I0408 12:47:46.640995  433674 pod_ready.go:102] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:48.558627  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.558666  433674 pod_ready.go:81] duration metric: took 3.925119514s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.558683  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583378  433674 pod_ready.go:92] pod "kube-proxy-2gn8m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.583405  433674 pod_ready.go:81] duration metric: took 24.715384ms for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583416  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598937  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.598969  433674 pod_ready.go:81] duration metric: took 15.544342ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598983  433674 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:47.918307  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:50.219513  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
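
The preload path above has three steps: check whether /preloaded.tar.lz4 already exists on the VM, copy the cached tarball over if it does not, then unpack it into /var with extended attributes preserved and delete the tarball. The sketch below mirrors the logged commands with os/exec; it is illustrative only and assumes the tarball is already present on the machine.

    // Unpack the preload tarball the way the log does, keeping security.capability
    // xattrs, then remove it to free disk space.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const remote = "/preloaded.tar.lz4"
    	if _, err := os.Stat(remote); err != nil {
    		fmt.Println("preload tarball missing, it would be copied over first:", err)
    		return
    	}
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", remote)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	_ = exec.Command("sudo", "rm", "-f", remote).Run() // drop the tarball once unpacked
    }
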
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
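
The "needs transfer" decisions above come from comparing what the runtime already holds (via crictl images --output json) against the images required for v1.20.0. The sketch below shows one way to do that comparison; the JSON field names match current crictl output but are an assumption here, and the required list is abbreviated.

    // List runtime images and report required tags that are missing.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.20.0",
    		"registry.k8s.io/etcd:3.4.13-0",
    		"registry.k8s.io/coredns:1.7.0",
    	}
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Println("bad crictl output:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range required {
    		if !have[want] {
    			fmt.Println("needs transfer:", want)
    		}
    	}
    }
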
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
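
The generated kubeadm.yaml above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal, stdlib-only way to sanity-check such a file before handing it to kubeadm is sketched below; the path comes from the log, and the check itself is an illustration rather than anything minikube runs.

    // Split the multi-document YAML and confirm each part declares apiVersion and kind.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	docs := strings.Split(string(data), "\n---\n")
    	for i, doc := range docs {
    		hasVersion := strings.Contains(doc, "apiVersion:")
    		hasKind := strings.Contains(doc, "kind:")
    		fmt.Printf("document %d: apiVersion=%v kind=%v\n", i, hasVersion, hasKind)
    	}
    }
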
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
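
Each openssl x509 -noout -in <cert> -checkend 86400 call above asks whether a control-plane certificate remains valid for at least another 86400 seconds (one day). The Go equivalent below uses crypto/x509 for the same test; the certificate path is one of those listed in the log and the 24h window mirrors the -checkend argument.

    // Report whether a PEM certificate expires within the given window.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True when NotAfter falls inside the next d, i.e. openssl -checkend would fail.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
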
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
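
The repeated sudo pgrep -xnf kube-apiserver.*minikube.* lines, spaced roughly 500ms apart, are a poll loop waiting for the restarted control plane's apiserver process to appear. A minimal sketch of that loop follows; the four-minute deadline is an assumption, while the pgrep pattern is the one from the log.

    // Poll for the apiserver process every 500ms until it appears or the deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
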
	I0408 12:47:46.994162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994640  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994672  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:46.994593  435248 retry.go:31] will retry after 1.863771905s: waiting for machine to come up
	I0408 12:47:48.860673  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861257  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:48.861151  435248 retry.go:31] will retry after 2.204894376s: waiting for machine to come up
	I0408 12:47:51.067423  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067909  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067937  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:51.067864  435248 retry.go:31] will retry after 2.625423179s: waiting for machine to come up
	I0408 12:47:50.608007  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:53.108084  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:52.717545  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:55.218944  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.695295  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695826  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695862  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:53.695772  435248 retry.go:31] will retry after 4.111917473s: waiting for machine to come up
	I0408 12:47:55.606909  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:58.111708  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:57.717559  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:59.718066  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.809179  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809697  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809729  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:57.809632  435248 retry.go:31] will retry after 4.27502806s: waiting for machine to come up
	I0408 12:48:02.086033  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086558  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has current primary IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086586  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Found IP for machine: 192.168.50.7
	I0408 12:48:02.086603  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserving static IP address...
	I0408 12:48:02.087069  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.087105  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserved static IP address: 192.168.50.7
	I0408 12:48:02.087137  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | skip adding static IP to network mk-default-k8s-diff-port-527454 - found existing host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"}
	I0408 12:48:02.087158  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Getting to WaitForSSH function...
	I0408 12:48:02.087177  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for SSH to be available...
	I0408 12:48:02.089228  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089585  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.089608  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH client type: external
	I0408 12:48:02.089840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa (-rw-------)
	I0408 12:48:02.089885  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:48:02.089900  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | About to run SSH command:
	I0408 12:48:02.089917  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | exit 0
	I0408 12:48:02.216245  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | SSH cmd err, output: <nil>: 
	I0408 12:48:02.216684  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetConfigRaw
	I0408 12:48:02.217582  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.220543  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.220961  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.220995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.221282  433439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:48:02.221480  433439 machine.go:94] provisionDockerMachine start ...
	I0408 12:48:02.221499  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:02.221738  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.224371  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.224770  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.224802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.225007  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.225236  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225399  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225548  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.225740  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.225957  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.225970  433439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:48:02.336716  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:48:02.336754  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337074  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:48:02.337108  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337351  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.340133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340539  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.340583  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340653  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.340842  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341016  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341171  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.341346  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.341539  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.341556  433439 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-527454 && echo "default-k8s-diff-port-527454" | sudo tee /etc/hostname
	I0408 12:48:02.464462  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-527454
	
	I0408 12:48:02.464507  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.467682  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468082  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.468118  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468335  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.468595  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468782  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468954  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.469154  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.469372  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.469392  433439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-527454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-527454/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-527454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:48:02.593971  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:48:02.594006  433439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:48:02.594061  433439 buildroot.go:174] setting up certificates
	I0408 12:48:02.594078  433439 provision.go:84] configureAuth start
	I0408 12:48:02.594092  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.594431  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.597587  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.598043  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.600898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601267  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.601299  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601497  433439 provision.go:143] copyHostCerts
	I0408 12:48:02.601562  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:48:02.601588  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:48:02.601653  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:48:02.601841  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:48:02.601857  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:48:02.601888  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:48:02.601966  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:48:02.601981  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:48:02.602010  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:48:02.602088  433439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-527454 san=[127.0.0.1 192.168.50.7 default-k8s-diff-port-527454 localhost minikube]
	I0408 12:48:02.845116  433439 provision.go:177] copyRemoteCerts
	I0408 12:48:02.845190  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:48:02.845217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.848054  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.848406  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848559  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.848817  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.848986  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.849125  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:02.934223  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:48:02.962726  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0408 12:48:02.992767  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:48:03.021973  433439 provision.go:87] duration metric: took 427.87874ms to configureAuth
	I0408 12:48:03.022009  433439 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:48:03.022270  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:48:03.022382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.025407  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025765  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.025802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025959  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.026215  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026379  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026510  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.026659  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.026834  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.026856  433439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:48:03.310263  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:48:03.310307  433439 machine.go:97] duration metric: took 1.088813603s to provisionDockerMachine
	I0408 12:48:03.310323  433439 start.go:293] postStartSetup for "default-k8s-diff-port-527454" (driver="kvm2")
	I0408 12:48:03.310337  433439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:48:03.310362  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.310758  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:48:03.310799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.313533  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.313968  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.314001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.314201  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.314375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.314584  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.314760  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.400087  433439 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:48:03.405240  433439 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:48:03.405272  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:48:03.405351  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:48:03.405450  433439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:48:03.405570  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:48:03.415947  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:03.448935  433439 start.go:296] duration metric: took 138.593583ms for postStartSetup
	I0408 12:48:03.449025  433439 fix.go:56] duration metric: took 23.946779964s for fixHost
	I0408 12:48:03.449055  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.452026  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452392  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.452435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452630  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.452844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453063  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453248  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.453420  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.453604  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.453615  433439 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:48:03.565710  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580483.551031252
	
	I0408 12:48:03.565738  433439 fix.go:216] guest clock: 1712580483.551031252
	I0408 12:48:03.565750  433439 fix.go:229] Guest: 2024-04-08 12:48:03.551031252 +0000 UTC Remote: 2024-04-08 12:48:03.44903588 +0000 UTC m=+361.760256784 (delta=101.995372ms)
	I0408 12:48:03.565777  433439 fix.go:200] guest clock delta is within tolerance: 101.995372ms
	I0408 12:48:03.565787  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 24.063582343s
	I0408 12:48:03.565806  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.566106  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:03.569409  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.569776  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.569814  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.570017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570577  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570831  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570952  433439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:48:03.571021  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.571121  433439 ssh_runner.go:195] Run: cat /version.json
	I0408 12:48:03.571146  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.573939  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574167  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574300  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574333  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574469  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574594  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574621  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574674  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.574757  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574871  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.574957  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.575130  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.575441  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.575590  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.695930  433439 ssh_runner.go:195] Run: systemctl --version
	I0408 12:48:03.702915  433439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:48:03.853737  433439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:48:03.860218  433439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:48:03.860287  433439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:48:03.877827  433439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:48:03.877861  433439 start.go:494] detecting cgroup driver to use...
	I0408 12:48:03.877943  433439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:48:03.897232  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:48:03.913028  433439 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:48:03.913112  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:48:03.929574  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:48:03.946880  433439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:48:04.083524  433439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:48:04.243842  433439 docker.go:233] disabling docker service ...
	I0408 12:48:04.243938  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:48:04.260459  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:48:04.276119  433439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:48:04.428999  433439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:48:04.571431  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:48:04.589661  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:48:04.612872  433439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:48:04.612954  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.625841  433439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:48:04.625939  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.638868  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.652106  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.664883  433439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:48:04.678149  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.691069  433439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.711329  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.725917  433439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:48:04.738875  433439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:48:04.738941  433439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:48:04.756784  433439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:48:04.769852  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:04.895658  433439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:48:05.056165  433439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:48:05.056270  433439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:48:05.061838  433439 start.go:562] Will wait 60s for crictl version
	I0408 12:48:05.061918  433439 ssh_runner.go:195] Run: which crictl
	I0408 12:48:05.066280  433439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:48:05.110966  433439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:48:05.111084  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.142272  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.176138  433439 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:48:00.606508  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:03.107018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:05.109926  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:02.220836  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:04.718465  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.177382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:05.180028  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180334  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:05.180363  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180635  433439 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:48:05.185436  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:05.199001  433439 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:48:05.199130  433439 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:48:05.199174  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:05.239255  433439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:48:05.239358  433439 ssh_runner.go:195] Run: which lz4
	I0408 12:48:05.244115  433439 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:48:05.249135  433439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:48:05.249169  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:48:07.606284  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.607161  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.720025  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.219059  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.889921  433439 crio.go:462] duration metric: took 1.645848876s to copy over tarball
	I0408 12:48:06.890006  433439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:48:09.403589  433439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513555281s)
	I0408 12:48:09.403620  433439 crio.go:469] duration metric: took 2.513669951s to extract the tarball
	I0408 12:48:09.403627  433439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:48:09.446487  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:09.494576  433439 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:48:09.494606  433439 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:48:09.494614  433439 kubeadm.go:928] updating node { 192.168.50.7 8444 v1.29.3 crio true true} ...
	I0408 12:48:09.494822  433439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-527454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:48:09.494917  433439 ssh_runner.go:195] Run: crio config
	I0408 12:48:09.541809  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:09.541839  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:09.541859  433439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:48:09.541887  433439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-527454 NodeName:default-k8s-diff-port-527454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:48:09.542105  433439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-527454"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:48:09.542201  433439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:48:09.553494  433439 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:48:09.553591  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:48:09.564970  433439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0408 12:48:09.584888  433439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:48:09.604538  433439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0408 12:48:09.623993  433439 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0408 12:48:09.628368  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:09.642170  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:09.789791  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:48:09.808943  433439 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454 for IP: 192.168.50.7
	I0408 12:48:09.808972  433439 certs.go:194] generating shared ca certs ...
	I0408 12:48:09.808995  433439 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:48:09.809194  433439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:48:09.809242  433439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:48:09.809253  433439 certs.go:256] generating profile certs ...
	I0408 12:48:09.809344  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/client.key
	I0408 12:48:09.809415  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key.ad1d04eb
	I0408 12:48:09.809457  433439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key
	I0408 12:48:09.809645  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:48:09.809699  433439 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:48:09.809713  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:48:09.809742  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:48:09.809764  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:48:09.809792  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:48:09.809851  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:09.810516  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:48:09.866085  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:48:09.899718  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:48:09.941704  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:48:09.976180  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 12:48:10.014420  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:48:10.044380  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:48:10.072034  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:48:10.099417  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:48:10.126143  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:48:10.154244  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:48:10.183954  433439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:48:10.207277  433439 ssh_runner.go:195] Run: openssl version
	I0408 12:48:10.213691  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:48:10.228406  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233736  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233798  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.240236  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:48:10.253382  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:48:10.267783  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273234  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273318  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.279925  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:48:10.292710  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:48:10.305381  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310629  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310703  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.317063  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:48:10.330320  433439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:48:10.336138  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:48:10.343341  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:48:10.350536  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:48:10.357665  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:48:10.364925  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:48:10.372314  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:48:10.380001  433439 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:48:10.380107  433439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:48:10.380174  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.425378  433439 cri.go:89] found id: ""
	I0408 12:48:10.425475  433439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:48:10.438972  433439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:48:10.439000  433439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:48:10.439005  433439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:48:10.439051  433439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:48:10.452072  433439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:48:10.453410  433439 kubeconfig.go:125] found "default-k8s-diff-port-527454" server: "https://192.168.50.7:8444"
	I0408 12:48:10.456022  433439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:48:10.469116  433439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0408 12:48:10.469171  433439 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:48:10.469188  433439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:48:10.469256  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.517874  433439 cri.go:89] found id: ""
	I0408 12:48:10.517969  433439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:48:10.538088  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:48:10.551560  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:48:10.551580  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:48:10.551636  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:48:10.564123  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:48:10.564209  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:48:10.578691  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:48:10.590692  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:48:10.590765  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:48:10.602902  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.616831  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:48:10.616922  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.629213  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:48:10.641625  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:48:10.641709  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:48:10.653162  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:48:10.665261  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:10.811712  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.107002  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.606976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:12.188805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.221750  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.901885  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09013292s)
	I0408 12:48:11.975836  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.237051  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.329550  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.460345  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:48:12.460457  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.961443  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.460681  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.520828  433439 api_server.go:72] duration metric: took 1.060470201s to wait for apiserver process to appear ...
	I0408 12:48:13.520866  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:48:13.520899  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:13.521407  433439 api_server.go:269] stopped: https://192.168.50.7:8444/healthz: Get "https://192.168.50.7:8444/healthz": dial tcp 192.168.50.7:8444: connect: connection refused
	I0408 12:48:14.022007  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.564485  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.564526  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:16.564543  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.617870  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.617904  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:17.021290  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.026545  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.026578  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:17.521124  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.529552  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.529596  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:18.021125  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:18.037000  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:48:18.049656  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:48:18.049699  433439 api_server.go:131] duration metric: took 4.528823991s to wait for apiserver health ...
	I0408 12:48:18.049722  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:18.049730  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:18.051495  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:48:16.607222  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:18.607837  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.717612  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:19.217050  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.052916  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:48:18.072115  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:48:18.111408  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:48:18.130585  433439 system_pods.go:59] 8 kube-system pods found
	I0408 12:48:18.130629  433439 system_pods.go:61] "coredns-76f75df574-r99kj" [171e271b-eec6-4238-afb1-82a2f228c225] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:48:18.130641  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [7019f1eb-58ef-4b1f-acf3-ed3c1ed84623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:48:18.130651  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [80ccd16d-d883-4c92-bb13-abe2d412532c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:48:18.130661  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [78d513aa-1f24-42c0-bfb9-4c20fdee63f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:48:18.130669  433439 system_pods.go:61] "kube-proxy-ztmmc" [de09a26e-cd95-401a-b575-977fcd660c47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:48:18.130683  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [eac4d549-1763-45b8-be11-b3b9e83f5110] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:48:18.130702  433439 system_pods.go:61] "metrics-server-57f55c9bc5-44qbm" [52631fc6-84d0-443b-ba42-de35a65db0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:48:18.130714  433439 system_pods.go:61] "storage-provisioner" [82e8b0d0-6c22-4644-8bd1-b48887b0fe82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:48:18.130730  433439 system_pods.go:74] duration metric: took 19.293309ms to wait for pod list to return data ...
	I0408 12:48:18.130745  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:48:18.135625  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:48:18.135663  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:48:18.135679  433439 node_conditions.go:105] duration metric: took 4.924641ms to run NodePressure ...
	I0408 12:48:18.135724  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:18.416272  433439 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424302  433439 kubeadm.go:733] kubelet initialised
	I0408 12:48:18.424325  433439 kubeadm.go:734] duration metric: took 8.015642ms waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424342  433439 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:48:18.436706  433439 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.447063  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447102  433439 pod_ready.go:81] duration metric: took 10.361708ms for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.447116  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447126  433439 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.460464  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460496  433439 pod_ready.go:81] duration metric: took 13.357612ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.460513  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460523  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.469991  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470035  433439 pod_ready.go:81] duration metric: took 9.502493ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.470072  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470083  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.516886  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516920  433439 pod_ready.go:81] duration metric: took 46.823396ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.516933  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516940  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915101  433439 pod_ready.go:92] pod "kube-proxy-ztmmc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:18.915131  433439 pod_ready.go:81] duration metric: took 398.182437ms for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915144  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:20.922456  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.107083  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.108249  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.219995  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.718091  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.922607  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:24.922155  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:24.922185  433439 pod_ready.go:81] duration metric: took 6.007031338s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:24.922200  433439 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:25.607653  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.216429  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.218553  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.717516  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.931412  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:29.430930  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.608369  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:33.107424  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:32.717551  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.216256  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.931958  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:34.430950  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.607018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.607820  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:40.106361  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.217721  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:39.218016  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.929987  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:38.931900  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.429986  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:42.605609  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:44.606744  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.717196  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:43.718405  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.430411  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:45.932993  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.607058  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.607716  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.216568  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.218325  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.718153  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:48.430488  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.432250  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:51.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.606447  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.217434  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:55.717265  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:52.928902  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.931253  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:56.106505  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.108187  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.218754  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.718600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:57.431032  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:59.930470  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.608450  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.106647  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.218064  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:01.933210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:04.428894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.428974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.606895  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:08.106819  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:07.718023  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:08.429894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.930925  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.609607  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:13.106296  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.716182  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.717779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:12.931911  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.933011  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:15.607174  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:18.106346  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:17.216822  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.216874  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:17.429653  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.432135  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:20.609415  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.105576  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:25.106666  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:21.217932  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.720613  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:21.929027  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.933879  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.429452  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:27.106798  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:29.605690  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.216525  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.216788  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.217600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.433974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.930607  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.606729  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.107220  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:32.218719  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.718165  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:33.429854  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:35.933211  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:36.605433  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:38.606014  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.217818  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:39.717250  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.431584  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:40.930328  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.106567  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:43.606770  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.718244  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.218855  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:43.428915  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:45.430859  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.106078  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.108320  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.718065  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.721772  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.432167  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:49.929922  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:50.606575  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.106564  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:51.217123  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.217785  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.217941  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:52.429230  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:54.929643  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.607214  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:58.109657  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:57.717805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.219041  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.430097  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:59.430226  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.605915  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:03.106990  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.717304  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.717757  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:01.930407  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.429884  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:06.430557  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:05.605804  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.606737  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:10.107370  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.218275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:09.716548  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.930029  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.429317  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.107556  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:14.606578  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.717001  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:13.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.719875  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:13.429712  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.430255  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:17.106153  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:19.607656  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.217812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.719543  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:17.930126  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.430210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:22.107118  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.107812  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:23.217266  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:25.218476  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:22.928804  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.930608  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:26.607216  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:28.608092  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.717812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:30.218079  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.429985  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:29.928718  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:31.106901  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.606293  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:32.717659  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.217468  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:31.928982  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.929758  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.930610  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:36.110235  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:38.606389  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.217645  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:39.218104  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:38.429765  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.430575  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.607976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.106175  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:45.107548  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:41.716953  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.717149  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:42.928908  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:44.929425  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:47.606676  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.608190  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.217528  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:48.717623  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:50.729538  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.929626  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.429810  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.106861  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.608143  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:53.217707  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:55.718120  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:51.929891  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.430216  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:57.106572  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.108219  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.216315  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:00.218162  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:56.931254  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.429578  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.606793  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.105581  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:02.218796  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.718232  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:01.929336  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:03.929754  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.429604  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.106159  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.608247  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:07.216779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:09.217275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.929238  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:10.929862  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.107603  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.611978  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.718498  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:12.931867  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:15.430299  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.106552  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.106622  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.108053  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.217595  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.217758  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.220115  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:17.930896  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.430609  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.108741  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.605215  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.716480  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.720217  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:22.929962  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:25.430340  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.605294  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:28.606623  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:27.218727  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.716524  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.430647  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.929105  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:30.607227  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.106814  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.106890  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:31.716747  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.718962  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:31.930145  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.931893  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.932546  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:37.606783  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:39.607738  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.217708  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:38.717067  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.721801  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:38.429490  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.932035  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:42.106879  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:44.607134  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:41.210350  433557 pod_ready.go:81] duration metric: took 4m0.000311819s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:41.210399  433557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 12:51:41.210413  433557 pod_ready.go:38] duration metric: took 4m3.201150727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:41.210464  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:51:41.210520  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:41.210591  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:41.269963  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:41.269999  433557 cri.go:89] found id: ""
	I0408 12:51:41.270010  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:41.270072  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.275411  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:41.275495  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:41.319478  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:41.319517  433557 cri.go:89] found id: ""
	I0408 12:51:41.319529  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:41.319590  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.329956  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:41.330045  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:41.380017  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:41.380049  433557 cri.go:89] found id: ""
	I0408 12:51:41.380061  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:41.380131  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.384973  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:41.385077  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:41.429757  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:41.429786  433557 cri.go:89] found id: ""
	I0408 12:51:41.429798  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:41.429863  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.435404  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:41.435488  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:41.484998  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:41.485031  433557 cri.go:89] found id: ""
	I0408 12:51:41.485042  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:41.485111  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.489802  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:41.489878  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:41.543982  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.544016  433557 cri.go:89] found id: ""
	I0408 12:51:41.544028  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:41.544096  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.548766  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:41.548836  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:41.588398  433557 cri.go:89] found id: ""
	I0408 12:51:41.588425  433557 logs.go:276] 0 containers: []
	W0408 12:51:41.588433  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:41.588439  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:41.588498  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:41.635748  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:41.635771  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:41.635775  433557 cri.go:89] found id: ""
	I0408 12:51:41.635782  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:41.635849  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.641800  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.646173  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:41.646206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.717189  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:41.717228  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:41.779618  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:41.779653  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:41.840050  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:41.840092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:41.855982  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:41.856016  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:42.016416  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:42.016455  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:42.085493  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:42.085538  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:42.132590  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:42.132626  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:42.642069  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:42.642125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:42.708516  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:42.708566  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:42.759072  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:42.759125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:42.810189  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:42.810242  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:42.855931  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:42.855971  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.396658  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.414640  433557 api_server.go:72] duration metric: took 4m14.728700184s to wait for apiserver process to appear ...
	I0408 12:51:45.414671  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:51:45.414714  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.414772  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.460983  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:45.461012  433557 cri.go:89] found id: ""
	I0408 12:51:45.461023  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:45.461102  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.466928  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.467037  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.516723  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:45.516746  433557 cri.go:89] found id: ""
	I0408 12:51:45.516755  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:45.516813  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.521315  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.521413  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.560838  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.560865  433557 cri.go:89] found id: ""
	I0408 12:51:45.560876  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:45.560926  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.565852  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.565937  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.610154  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:45.610175  433557 cri.go:89] found id: ""
	I0408 12:51:45.610183  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:45.610229  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.615014  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.615098  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.658261  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:45.658292  433557 cri.go:89] found id: ""
	I0408 12:51:45.658304  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:45.658367  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.663148  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.663242  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:45.708805  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.708838  433557 cri.go:89] found id: ""
	I0408 12:51:45.708850  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:45.708906  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.713733  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:45.713800  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:45.763432  433557 cri.go:89] found id: ""
	I0408 12:51:45.763465  433557 logs.go:276] 0 containers: []
	W0408 12:51:45.763477  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:45.763486  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:45.763555  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:45.808689  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:45.808711  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.808715  433557 cri.go:89] found id: ""
	I0408 12:51:45.808723  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:45.808782  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.813386  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.818556  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:45.818589  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:43.429577  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:45.431057  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:47.106109  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:48.599265  433674 pod_ready.go:81] duration metric: took 4m0.000260398s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:48.599302  433674 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:51:48.599335  433674 pod_ready.go:38] duration metric: took 4m13.995684279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:48.599373  433674 kubeadm.go:591] duration metric: took 4m22.072516751s to restartPrimaryControlPlane
	W0408 12:51:48.599529  433674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:48.599619  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:45.864458  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:45.864503  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.907964  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:45.908000  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.980082  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:45.980123  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:46.041294  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:46.041330  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:46.102117  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:46.102171  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:46.188553  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:46.188583  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:46.234191  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:46.234229  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:46.281240  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.281273  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.721047  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.721092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.781387  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.781429  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:46.797003  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.797043  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:46.917073  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.917109  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:49.481948  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:51:49.488261  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:51:49.489694  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:51:49.489726  433557 api_server.go:131] duration metric: took 4.075047023s to wait for apiserver health ...
	I0408 12:51:49.489737  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:51:49.489772  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:49.489845  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:49.535955  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.535980  433557 cri.go:89] found id: ""
	I0408 12:51:49.535990  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:49.536052  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.543143  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:49.543239  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.590041  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:49.590075  433557 cri.go:89] found id: ""
	I0408 12:51:49.590087  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:49.590155  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.595726  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.595803  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.645009  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:49.645046  433557 cri.go:89] found id: ""
	I0408 12:51:49.645057  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:49.645110  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.650243  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.650329  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.693859  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:49.693882  433557 cri.go:89] found id: ""
	I0408 12:51:49.693895  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:49.693972  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.699620  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.699709  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.755614  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:49.755646  433557 cri.go:89] found id: ""
	I0408 12:51:49.755657  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:49.755739  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.761838  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.761913  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.808919  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:49.808950  433557 cri.go:89] found id: ""
	I0408 12:51:49.808961  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:49.809040  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.813965  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.814046  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.859700  433557 cri.go:89] found id: ""
	I0408 12:51:49.859737  433557 logs.go:276] 0 containers: []
	W0408 12:51:49.859748  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.859757  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:49.859832  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:49.908020  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:49.908044  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:49.908050  433557 cri.go:89] found id: ""
	I0408 12:51:49.908060  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:49.908129  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.913034  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.919193  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:49.919233  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.984657  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.984704  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:50.003487  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:50.003526  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:50.139417  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:50.139481  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:50.240166  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:50.240206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:50.288776  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:50.288823  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:50.339222  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:50.339252  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:50.402263  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:50.402308  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:50.461894  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:50.461946  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:50.507329  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:50.507373  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:50.576851  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:50.576894  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:47.932221  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.430702  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.947824  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:50.947878  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:51.007034  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:51.007084  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:53.563768  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:51:53.563811  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.563818  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.563824  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.563829  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.563835  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.563840  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.563850  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.563857  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.563870  433557 system_pods.go:74] duration metric: took 4.074125222s to wait for pod list to return data ...
	I0408 12:51:53.563884  433557 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:51:53.566991  433557 default_sa.go:45] found service account: "default"
	I0408 12:51:53.567015  433557 default_sa.go:55] duration metric: took 3.122873ms for default service account to be created ...
	I0408 12:51:53.567024  433557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:51:53.574517  433557 system_pods.go:86] 8 kube-system pods found
	I0408 12:51:53.574558  433557 system_pods.go:89] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.574565  433557 system_pods.go:89] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.574570  433557 system_pods.go:89] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.574575  433557 system_pods.go:89] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.574581  433557 system_pods.go:89] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.574587  433557 system_pods.go:89] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.574598  433557 system_pods.go:89] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.574605  433557 system_pods.go:89] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.574616  433557 system_pods.go:126] duration metric: took 7.585497ms to wait for k8s-apps to be running ...
	I0408 12:51:53.574629  433557 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:51:53.574720  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:53.597605  433557 system_svc.go:56] duration metric: took 22.957663ms WaitForService to wait for kubelet
	I0408 12:51:53.597658  433557 kubeadm.go:576] duration metric: took 4m22.91172229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:51:53.597683  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:51:53.601940  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:51:53.601992  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:51:53.602009  433557 node_conditions.go:105] duration metric: took 4.320913ms to run NodePressure ...
	I0408 12:51:53.602028  433557 start.go:240] waiting for startup goroutines ...
	I0408 12:51:53.602040  433557 start.go:245] waiting for cluster config update ...
	I0408 12:51:53.602060  433557 start.go:254] writing updated cluster config ...
	I0408 12:51:53.602426  433557 ssh_runner.go:195] Run: rm -f paused
	I0408 12:51:53.660257  433557 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:51:53.662533  433557 out.go:177] * Done! kubectl is now configured to use "no-preload-135234" cluster and "default" namespace by default
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:51:52.434513  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:54.930343  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:57.432046  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:59.436031  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:01.930142  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:03.931249  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:06.429806  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:08.929311  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:10.929707  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:13.430287  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:15.430449  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:17.933664  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:20.428983  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:21.300307  433674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.700649463s)
	I0408 12:52:21.300429  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:21.321628  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:21.334359  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:21.345697  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:21.345755  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:21.345804  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:52:21.356798  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:21.356868  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:21.368622  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:52:21.379589  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:21.379676  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:21.391211  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.401783  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:21.401874  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.413655  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:52:21.424585  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:21.424673  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:21.436887  433674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:21.495891  433674 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:21.496022  433674 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:21.667820  433674 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:21.667973  433674 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:21.668100  433674 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:21.904532  433674 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:21.906631  433674 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:21.906736  433674 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:21.906833  433674 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:21.906962  433674 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:21.907084  433674 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:21.907206  433674 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:21.907283  433674 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:21.907372  433674 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:21.907705  433674 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:21.908164  433674 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:21.908536  433674 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:21.908852  433674 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:21.908942  433674 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:22.096319  433674 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:22.286425  433674 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:22.442534  433674 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:22.542901  433674 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:22.959098  433674 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:22.959656  433674 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:22.962359  433674 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:22.965011  433674 out.go:204]   - Booting up control plane ...
	I0408 12:52:22.965148  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:22.965830  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:22.966718  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:22.987425  433674 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:22.988618  433674 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:22.988690  433674 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:23.134634  433674 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:52:22.429735  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.431237  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.923026  433439 pod_ready.go:81] duration metric: took 4m0.000804438s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	E0408 12:52:24.923079  433439 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:52:24.923103  433439 pod_ready.go:38] duration metric: took 4m6.498748448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:24.923143  433439 kubeadm.go:591] duration metric: took 4m14.484131334s to restartPrimaryControlPlane
	W0408 12:52:24.923222  433439 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:52:24.923260  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:52:29.641484  433674 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505486 seconds
	I0408 12:52:29.659612  433674 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:52:29.683882  433674 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:52:30.237806  433674 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:52:30.238135  433674 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-488947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:52:30.755095  433674 kubeadm.go:309] [bootstrap-token] Using token: kwhj7g.e2hm9yupaxknooep
	I0408 12:52:30.756904  433674 out.go:204]   - Configuring RBAC rules ...
	I0408 12:52:30.757044  433674 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:52:30.763322  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:52:30.776489  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:52:30.780180  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:52:30.784949  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:52:30.789409  433674 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:52:30.810228  433674 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:52:31.071672  433674 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:52:31.180390  433674 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:52:31.180421  433674 kubeadm.go:309] 
	I0408 12:52:31.180493  433674 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:52:31.180504  433674 kubeadm.go:309] 
	I0408 12:52:31.180626  433674 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:52:31.180652  433674 kubeadm.go:309] 
	I0408 12:52:31.180682  433674 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:52:31.180758  433674 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:52:31.180823  433674 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:52:31.180835  433674 kubeadm.go:309] 
	I0408 12:52:31.180898  433674 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:52:31.180908  433674 kubeadm.go:309] 
	I0408 12:52:31.180967  433674 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:52:31.180978  433674 kubeadm.go:309] 
	I0408 12:52:31.181069  433674 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:52:31.181200  433674 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:52:31.181301  433674 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:52:31.181312  433674 kubeadm.go:309] 
	I0408 12:52:31.181446  433674 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:52:31.181564  433674 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:52:31.181577  433674 kubeadm.go:309] 
	I0408 12:52:31.181706  433674 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.181869  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:52:31.181923  433674 kubeadm.go:309] 	--control-plane 
	I0408 12:52:31.181933  433674 kubeadm.go:309] 
	I0408 12:52:31.182039  433674 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:52:31.182055  433674 kubeadm.go:309] 
	I0408 12:52:31.182167  433674 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.182323  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:52:31.182467  433674 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:52:31.182492  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:52:31.182502  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:52:31.184299  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:52:31.185716  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:52:31.217708  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:52:31.277627  433674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:52:31.277716  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:31.277740  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-488947 minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=embed-certs-488947 minikube.k8s.io/primary=true
	I0408 12:52:31.591490  433674 ops.go:34] apiserver oom_adj: -16
	I0408 12:52:31.591651  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.092642  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.591845  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.092645  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.592585  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.092066  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.592232  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.091882  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.591794  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.091849  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.592616  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.091816  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.091756  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.592114  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.092524  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.591838  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.091853  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.591747  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.092421  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.592611  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.092369  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.092638  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.592549  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.091831  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.592358  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.799776  433674 kubeadm.go:1107] duration metric: took 13.522136387s to wait for elevateKubeSystemPrivileges
	W0408 12:52:44.799833  433674 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:52:44.799845  433674 kubeadm.go:393] duration metric: took 5m18.325910079s to StartCluster
	I0408 12:52:44.799870  433674 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.799981  433674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:52:44.802396  433674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.802704  433674 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:52:44.804525  433674 out.go:177] * Verifying Kubernetes components...
	I0408 12:52:44.802776  433674 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:52:44.802921  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:52:44.805724  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:52:44.805735  433674 addons.go:69] Setting metrics-server=true in profile "embed-certs-488947"
	I0408 12:52:44.805751  433674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488947"
	I0408 12:52:44.805777  433674 addons.go:234] Setting addon metrics-server=true in "embed-certs-488947"
	W0408 12:52:44.805792  433674 addons.go:243] addon metrics-server should already be in state true
	I0408 12:52:44.805824  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805727  433674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488947"
	I0408 12:52:44.805869  433674 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-488947"
	W0408 12:52:44.805883  433674 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:52:44.805915  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805834  433674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488947"
	I0408 12:52:44.806260  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806262  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806266  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806286  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806288  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806326  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.824170  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0408 12:52:44.824862  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.825517  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.825547  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.826049  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.826714  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.826752  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.827345  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0408 12:52:44.827569  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0408 12:52:44.828195  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828218  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828860  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.828892  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829023  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.829040  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829499  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829541  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829687  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.830201  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.830247  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.834128  433674 addons.go:234] Setting addon default-storageclass=true in "embed-certs-488947"
	W0408 12:52:44.834156  433674 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:52:44.834189  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.834569  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.834611  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.845829  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 12:52:44.846556  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.847545  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.847571  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.848210  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.848478  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.850407  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.850783  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0408 12:52:44.853144  433674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:52:44.851322  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.854214  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0408 12:52:44.855198  433674 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:44.855222  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:52:44.855245  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.855434  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.855766  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855797  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.855936  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855956  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.856190  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856264  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856382  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.856937  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.856973  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.857994  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.859623  433674 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:52:44.860991  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:52:44.861012  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:52:44.858778  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.861032  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.861051  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.861072  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.859293  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.861282  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.861617  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.861817  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.863813  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864274  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.864299  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864483  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.864681  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.864846  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.865028  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.874355  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0408 12:52:44.874834  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.875388  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.875418  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.875775  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.875967  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.877519  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.877786  433674 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:44.877803  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:52:44.877818  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.880463  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.880846  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.880874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.881040  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.881234  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.881615  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.881753  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:45.057304  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:52:45.082702  433674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.091955  433674 node_ready.go:49] node "embed-certs-488947" has status "Ready":"True"
	I0408 12:52:45.091994  433674 node_ready.go:38] duration metric: took 9.246027ms for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.092007  433674 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:45.101654  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:45.237037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:52:45.237068  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:52:45.238421  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:45.274088  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:45.295037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:52:45.295078  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:52:45.397474  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:45.397504  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:52:45.431610  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:46.375681  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101541881s)
	I0408 12:52:46.375827  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.375862  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376204  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.376244  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.137166571s)
	I0408 12:52:46.376271  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.376291  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.376309  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376313  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376319  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.377184  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377205  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377613  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.377680  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377699  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377709  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.377747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.378168  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.378182  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.413325  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.413361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.413757  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.413780  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.679538  433674 pod_ready.go:92] pod "coredns-76f75df574-4gdp4" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.679577  433674 pod_ready.go:81] duration metric: took 1.577895468s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.679596  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760007  433674 pod_ready.go:92] pod "coredns-76f75df574-r5rxq" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.760043  433674 pod_ready.go:81] duration metric: took 80.437752ms for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760059  433674 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.803070  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371401052s)
	I0408 12:52:46.803136  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803150  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803496  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803519  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803530  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803539  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803846  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803862  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803882  433674 addons.go:470] Verifying addon metrics-server=true in "embed-certs-488947"
	I0408 12:52:46.806034  433674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0408 12:52:46.804164  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.807597  433674 pod_ready.go:81] duration metric: took 47.521367ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807622  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807621  433674 addons.go:505] duration metric: took 2.004847213s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0408 12:52:46.827049  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.827075  433674 pod_ready.go:81] duration metric: took 19.440746ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.827086  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848718  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.848759  433674 pod_ready.go:81] duration metric: took 21.664037ms for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848775  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087350  433674 pod_ready.go:92] pod "kube-proxy-mqrtp" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.087387  433674 pod_ready.go:81] duration metric: took 238.602902ms for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087403  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486822  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.486863  433674 pod_ready.go:81] duration metric: took 399.44977ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486875  433674 pod_ready.go:38] duration metric: took 2.394853452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:47.486895  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:52:47.486967  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:52:47.517426  433674 api_server.go:72] duration metric: took 2.714672176s to wait for apiserver process to appear ...
	I0408 12:52:47.517461  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:52:47.517492  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:52:47.527074  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:52:47.528230  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:52:47.528285  433674 api_server.go:131] duration metric: took 10.815426ms to wait for apiserver health ...
	I0408 12:52:47.528296  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:52:47.692054  433674 system_pods.go:59] 9 kube-system pods found
	I0408 12:52:47.692091  433674 system_pods.go:61] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:47.692096  433674 system_pods.go:61] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:47.692101  433674 system_pods.go:61] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:47.692105  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:47.692109  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:47.692112  433674 system_pods.go:61] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:47.692116  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:47.692123  433674 system_pods.go:61] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:47.692129  433674 system_pods.go:61] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:47.692137  433674 system_pods.go:74] duration metric: took 163.833038ms to wait for pod list to return data ...
	I0408 12:52:47.692146  433674 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:52:47.886668  433674 default_sa.go:45] found service account: "default"
	I0408 12:52:47.886695  433674 default_sa.go:55] duration metric: took 194.543392ms for default service account to be created ...
	I0408 12:52:47.886707  433674 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:52:48.090174  433674 system_pods.go:86] 9 kube-system pods found
	I0408 12:52:48.090212  433674 system_pods.go:89] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:48.090217  433674 system_pods.go:89] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:48.090222  433674 system_pods.go:89] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:48.090226  433674 system_pods.go:89] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:48.090232  433674 system_pods.go:89] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:48.090236  433674 system_pods.go:89] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:48.090240  433674 system_pods.go:89] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:48.090248  433674 system_pods.go:89] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:48.090253  433674 system_pods.go:89] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:48.090260  433674 system_pods.go:126] duration metric: took 203.547421ms to wait for k8s-apps to be running ...
	I0408 12:52:48.090266  433674 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:52:48.090312  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:48.106285  433674 system_svc.go:56] duration metric: took 15.998172ms WaitForService to wait for kubelet
	I0408 12:52:48.106322  433674 kubeadm.go:576] duration metric: took 3.303579521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:52:48.106345  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:52:48.287351  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:52:48.287381  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:52:48.287392  433674 node_conditions.go:105] duration metric: took 181.042972ms to run NodePressure ...
	I0408 12:52:48.287403  433674 start.go:240] waiting for startup goroutines ...
	I0408 12:52:48.287410  433674 start.go:245] waiting for cluster config update ...
	I0408 12:52:48.287419  433674 start.go:254] writing updated cluster config ...
	I0408 12:52:48.287738  433674 ssh_runner.go:195] Run: rm -f paused
	I0408 12:52:48.341532  433674 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:52:48.343890  433674 out.go:177] * Done! kubectl is now configured to use "embed-certs-488947" cluster and "default" namespace by default
	I0408 12:52:57.475303  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.552015668s)
	I0408 12:52:57.475390  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:57.492800  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:57.507211  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:57.520174  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:57.520203  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:57.520267  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:52:57.531854  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:57.531939  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:57.543764  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:52:57.555407  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:57.555479  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:57.569452  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.580478  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:57.580575  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.591819  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:52:57.602496  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:57.602589  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:57.613811  433439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:57.669998  433439 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:57.670137  433439 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:57.830674  433439 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:57.830802  433439 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:57.830882  433439 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:58.090382  433439 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:58.092626  433439 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:58.092733  433439 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:58.092809  433439 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:58.092906  433439 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:58.093027  433439 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:58.093130  433439 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:58.093202  433439 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:58.093547  433439 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:58.093941  433439 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:58.094342  433439 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:58.094708  433439 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:58.095077  433439 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:58.095159  433439 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:58.328890  433439 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:58.516475  433439 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:58.830765  433439 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:59.052737  433439 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:59.306668  433439 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:59.307305  433439 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:59.312102  433439 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:59.314983  433439 out.go:204]   - Booting up control plane ...
	I0408 12:52:59.315104  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:59.315191  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:59.315305  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:59.334624  433439 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:59.335637  433439 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:59.335713  433439 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:59.486408  433439 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:05.490227  433439 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002996 seconds
	I0408 12:53:05.526221  433439 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:53:05.553758  433439 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:53:06.101116  433439 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:53:06.101340  433439 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-527454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:53:06.616939  433439 kubeadm.go:309] [bootstrap-token] Using token: oe56hb.uz3a0dd96vnry1w3
	I0408 12:53:06.618840  433439 out.go:204]   - Configuring RBAC rules ...
	I0408 12:53:06.619038  433439 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:53:06.625364  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:53:06.638696  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:53:06.643811  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:53:06.647895  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:53:06.651857  433439 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:53:06.677056  433439 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:53:06.939588  433439 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:53:07.038633  433439 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:53:07.041464  433439 kubeadm.go:309] 
	I0408 12:53:07.041565  433439 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:53:07.041578  433439 kubeadm.go:309] 
	I0408 12:53:07.041680  433439 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:53:07.041699  433439 kubeadm.go:309] 
	I0408 12:53:07.041723  433439 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:53:07.041824  433439 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:53:07.041906  433439 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:53:07.041917  433439 kubeadm.go:309] 
	I0408 12:53:07.041988  433439 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:53:07.041998  433439 kubeadm.go:309] 
	I0408 12:53:07.042103  433439 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:53:07.042123  433439 kubeadm.go:309] 
	I0408 12:53:07.042168  433439 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:53:07.042253  433439 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:53:07.042351  433439 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:53:07.042361  433439 kubeadm.go:309] 
	I0408 12:53:07.042588  433439 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:53:07.042708  433439 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:53:07.042719  433439 kubeadm.go:309] 
	I0408 12:53:07.042823  433439 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.042959  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:53:07.042994  433439 kubeadm.go:309] 	--control-plane 
	I0408 12:53:07.043003  433439 kubeadm.go:309] 
	I0408 12:53:07.043127  433439 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:53:07.043143  433439 kubeadm.go:309] 
	I0408 12:53:07.043253  433439 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.043400  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:53:07.043583  433439 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:53:07.043608  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:53:07.043620  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:53:07.045283  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:53:07.046614  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:53:07.074907  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:53:07.107168  433439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:53:07.107232  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.107256  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-527454 minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=default-k8s-diff-port-527454 minikube.k8s.io/primary=true
	I0408 12:53:07.208551  433439 ops.go:34] apiserver oom_adj: -16
	I0408 12:53:07.395206  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.896090  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.396097  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.896240  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.395654  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.895751  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.396242  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.896204  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.395766  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.895555  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.396014  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.896092  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.395507  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.895832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.395237  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.895333  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.396191  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.895561  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.395832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.895785  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.395460  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.895320  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.395826  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.896002  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.396326  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.514796  433439 kubeadm.go:1107] duration metric: took 12.407623504s to wait for elevateKubeSystemPrivileges
	W0408 12:53:19.514843  433439 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:53:19.514856  433439 kubeadm.go:393] duration metric: took 5m9.134867072s to StartCluster
	I0408 12:53:19.514882  433439 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.514981  433439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:53:19.516708  433439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.516988  433439 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:53:19.518597  433439 out.go:177] * Verifying Kubernetes components...
	I0408 12:53:19.517057  433439 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:53:19.517238  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:53:19.519990  433439 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520011  433439 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:19.520003  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0408 12:53:19.520052  433439 addons.go:243] addon metrics-server should already be in state true
	I0408 12:53:19.520095  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.519995  433439 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520161  433439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.520247  433439 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:53:19.520274  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.520519  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520521  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520555  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520616  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520639  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520556  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.536637  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0408 12:53:19.536896  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0408 12:53:19.536997  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0408 12:53:19.537194  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537369  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537453  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537748  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537772  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.537883  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537895  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538210  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538262  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538352  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.538372  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538815  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.538818  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538875  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.539030  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.542211  433439 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.542228  433439 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:53:19.542252  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.542841  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.542871  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.556920  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0408 12:53:19.557552  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0408 12:53:19.557712  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.557930  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.558468  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.558482  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.559174  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.559474  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.559852  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.559881  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.560358  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.561323  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.561357  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.561606  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.563808  433439 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:53:19.565205  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:53:19.565224  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:53:19.565252  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.565914  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0408 12:53:19.566710  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.567503  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.567521  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.568270  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.568656  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.568664  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.569109  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.569136  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.569294  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.569451  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.569707  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.569894  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.570455  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.572243  433439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:53:19.573764  433439 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:19.573784  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:53:19.573804  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.576844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577310  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.577380  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577547  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.577851  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.578009  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.578154  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.579402  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0408 12:53:19.579860  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.580428  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.580448  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.581001  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.581202  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.582638  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.582913  433439 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:19.582929  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:53:19.582949  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.585995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586456  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.586488  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586665  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.586845  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.586974  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.587077  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.782606  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:53:19.822413  433439 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833467  433439 node_ready.go:49] node "default-k8s-diff-port-527454" has status "Ready":"True"
	I0408 12:53:19.833493  433439 node_ready.go:38] duration metric: took 11.040127ms for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833503  433439 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:19.845052  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:19.990826  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:20.027800  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:53:20.027827  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:53:20.066661  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:20.168240  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:53:20.168271  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:53:20.327307  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.327336  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:53:20.390128  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.455235  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455265  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455575  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455607  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.455618  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455628  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455912  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455929  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.494751  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.494778  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.495103  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.495126  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.495132  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.454862  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.388156991s)
	I0408 12:53:21.454942  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.454956  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455313  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.455368  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455377  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455386  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.455395  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455729  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455753  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455797  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.591677  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201496165s)
	I0408 12:53:21.591745  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.591760  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.592145  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592183  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592199  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.592214  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592484  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592501  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592513  433439 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:21.594462  433439 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0408 12:53:21.595731  433439 addons.go:505] duration metric: took 2.078676652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0408 12:53:21.852741  433439 pod_ready.go:102] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"False"
	I0408 12:53:22.375241  433439 pod_ready.go:92] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.375283  433439 pod_ready.go:81] duration metric: took 2.53020032s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.375298  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.391968  433439 pod_ready.go:92] pod "coredns-76f75df574-z56lf" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.392003  433439 pod_ready.go:81] duration metric: took 16.695581ms for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.392018  433439 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398659  433439 pod_ready.go:92] pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.398688  433439 pod_ready.go:81] duration metric: took 6.657546ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398699  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407214  433439 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.407241  433439 pod_ready.go:81] duration metric: took 8.535246ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407252  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416605  433439 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.416632  433439 pod_ready.go:81] duration metric: took 9.374648ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416644  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750191  433439 pod_ready.go:92] pod "kube-proxy-tlhff" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.750220  433439 pod_ready.go:81] duration metric: took 333.570363ms for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750231  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.148980  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:23.149009  433439 pod_ready.go:81] duration metric: took 398.771226ms for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.149018  433439 pod_ready.go:38] duration metric: took 3.315505787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:23.149034  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:53:23.149087  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:53:23.165120  433439 api_server.go:72] duration metric: took 3.648094543s to wait for apiserver process to appear ...
	I0408 12:53:23.165149  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:53:23.165170  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:53:23.171016  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:53:23.172486  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:53:23.172510  433439 api_server.go:131] duration metric: took 7.354957ms to wait for apiserver health ...
	I0408 12:53:23.172518  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:53:23.353807  433439 system_pods.go:59] 9 kube-system pods found
	I0408 12:53:23.353846  433439 system_pods.go:61] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.353853  433439 system_pods.go:61] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.353859  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.353866  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.353874  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.353879  433439 system_pods.go:61] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.353883  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.353890  433439 system_pods.go:61] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.353896  433439 system_pods.go:61] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.353911  433439 system_pods.go:74] duration metric: took 181.386053ms to wait for pod list to return data ...
	I0408 12:53:23.353923  433439 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:53:23.549663  433439 default_sa.go:45] found service account: "default"
	I0408 12:53:23.549702  433439 default_sa.go:55] duration metric: took 195.766529ms for default service account to be created ...
	I0408 12:53:23.549717  433439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:53:23.755668  433439 system_pods.go:86] 9 kube-system pods found
	I0408 12:53:23.755729  433439 system_pods.go:89] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.755739  433439 system_pods.go:89] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.755748  433439 system_pods.go:89] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.755755  433439 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.755761  433439 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.755768  433439 system_pods.go:89] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.755774  433439 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.755787  433439 system_pods.go:89] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.755792  433439 system_pods.go:89] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.755805  433439 system_pods.go:126] duration metric: took 206.081481ms to wait for k8s-apps to be running ...
	I0408 12:53:23.755814  433439 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:53:23.755866  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:23.774910  433439 system_svc.go:56] duration metric: took 19.080727ms WaitForService to wait for kubelet
	I0408 12:53:23.774954  433439 kubeadm.go:576] duration metric: took 4.257931558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:53:23.774985  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:53:23.949588  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:53:23.949618  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:53:23.949630  433439 node_conditions.go:105] duration metric: took 174.638826ms to run NodePressure ...
	I0408 12:53:23.949642  433439 start.go:240] waiting for startup goroutines ...
	I0408 12:53:23.949649  433439 start.go:245] waiting for cluster config update ...
	I0408 12:53:23.949659  433439 start.go:254] writing updated cluster config ...
	I0408 12:53:23.949929  433439 ssh_runner.go:195] Run: rm -f paused
	I0408 12:53:24.004633  433439 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:53:24.007640  433439 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-527454" cluster and "default" namespace by default
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
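	(For reference: after this first wait-control-plane timeout, minikube falls back to a reset-and-retry, which is what the following lines show. The equivalent manual cleanup on the node, exactly as invoked in the log, is:)
	sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force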
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 
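	(For reference, a minimal sketch of acting on the suggestion printed above, using only the profile name and flag reported by minikube; any other start flags used by this test are omitted here.)
	minikube -p old-k8s-version-384148 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-384148 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	minikube start -p old-k8s-version-384148 --extra-config=kubelet.cgroup-driver=systemd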
	
	
	==> CRI-O <==
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.644183824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581493644149481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63427045-c33f-463d-b322-57426907961c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.645000486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2be478d5-e1f5-4d7a-aa05-570f515dc3a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.645057278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2be478d5-e1f5-4d7a-aa05-570f515dc3a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.645096032Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2be478d5-e1f5-4d7a-aa05-570f515dc3a0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.679268726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c140964-c790-4130-be90-de5aea73fd59 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.679342252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c140964-c790-4130-be90-de5aea73fd59 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.680503983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33f32e91-3b04-42c3-8152-3370976112cf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.680994238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581493680956265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33f32e91-3b04-42c3-8152-3370976112cf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.681490041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=182f3444-d946-43cd-9699-1cb17c68dfa8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.681607506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=182f3444-d946-43cd-9699-1cb17c68dfa8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.681648481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=182f3444-d946-43cd-9699-1cb17c68dfa8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.716326205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f87420e-7ae5-4c11-b1b4-ef912b552001 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.716395247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f87420e-7ae5-4c11-b1b4-ef912b552001 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.717753042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d20c15fc-5acb-41ca-a3d2-af9c73814a71 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.718126110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581493718101284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d20c15fc-5acb-41ca-a3d2-af9c73814a71 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.719006325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0f09d76-d9df-4482-b899-8fcd4313dfff name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.719066575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0f09d76-d9df-4482-b899-8fcd4313dfff name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.719096841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d0f09d76-d9df-4482-b899-8fcd4313dfff name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.757336855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74a5cdcb-dc60-4153-a5c8-4ed71e2473a2 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.757421731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74a5cdcb-dc60-4153-a5c8-4ed71e2473a2 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.759295832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed98fc2f-5ab6-4a85-9b3c-5f2ea43e0d7a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.759777056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581493759748951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed98fc2f-5ab6-4a85-9b3c-5f2ea43e0d7a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.760351880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75e398c0-e6e0-415e-9c81-317b7c98f6d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.760402548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75e398c0-e6e0-415e-9c81-317b7c98f6d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:04:53 old-k8s-version-384148 crio[654]: time="2024-04-08 13:04:53.760441182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=75e398c0-e6e0-415e-9c81-317b7c98f6d0 name=/runtime.v1.RuntimeService/ListContainers
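	(For reference, this CRI-O excerpt corresponds to the "journalctl -u crio -n 400" collection step recorded earlier in the log; the repeated empty ListContainersResponse payloads agree with the empty container status table below. The same view can be reproduced on the node with:)
	sudo journalctl -u crio -n 400
	sudo crictl ps -a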
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
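	(For reference: "connection refused" on localhost:8443 means no API server is listening, which is consistent with the empty container list above. A minimal check on the node — the crictl invocation mirrors the one used in the log, the curl probe is illustrative:)
	sudo crictl ps -a --name=kube-apiserver
	curl -sk https://localhost:8443/healthz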
	
	
	==> dmesg <==
	[Apr 8 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056085] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848078] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.261221] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.697800] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.947628] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.074394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060792] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.180910] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.184499] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.345839] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +7.450294] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.068325] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.296156] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Apr 8 12:48] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 8 12:51] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[Apr 8 12:53] systemd-fstab-generator[5230]: Ignoring "noauto" option for root device
	[  +0.074857] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:04:53 up 17 min,  0 users,  load average: 0.00, 0.02, 0.05
	Linux old-k8s-version-384148 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000871180)
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: goroutine 127 [syscall]:
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: syscall.Syscall6(0xe8, 0xc, 0xc000c0fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000c0fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000629260, 0x0, 0x0, 0x0)
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0006488c0)
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Apr 08 13:04:50 old-k8s-version-384148 kubelet[6396]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Apr 08 13:04:50 old-k8s-version-384148 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 08 13:04:50 old-k8s-version-384148 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 08 13:04:51 old-k8s-version-384148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 08 13:04:51 old-k8s-version-384148 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 08 13:04:51 old-k8s-version-384148 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 08 13:04:51 old-k8s-version-384148 kubelet[6407]: I0408 13:04:51.444077    6407 server.go:416] Version: v1.20.0
	Apr 08 13:04:51 old-k8s-version-384148 kubelet[6407]: I0408 13:04:51.444489    6407 server.go:837] Client rotation is on, will bootstrap in background
	Apr 08 13:04:51 old-k8s-version-384148 kubelet[6407]: I0408 13:04:51.446448    6407 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 08 13:04:51 old-k8s-version-384148 kubelet[6407]: W0408 13:04:51.447379    6407 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 08 13:04:51 old-k8s-version-384148 kubelet[6407]: I0408 13:04:51.447896    6407 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (264.770737ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-384148" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.59s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (366.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-135234 -n no-preload-135234
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-08 13:07:03.102852055 +0000 UTC m=+6400.889534253
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-135234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-135234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.801µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-135234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-135234 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-135234 logs -n 25: (1.343205793s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:06 UTC | 08 Apr 24 13:06 UTC |
	| start   | -p newest-cni-337169 --memory=2200 --alsologtostderr   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:06 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 13:06:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 13:06:45.384426  440198 out.go:291] Setting OutFile to fd 1 ...
	I0408 13:06:45.384713  440198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 13:06:45.384725  440198 out.go:304] Setting ErrFile to fd 2...
	I0408 13:06:45.384732  440198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 13:06:45.384951  440198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 13:06:45.385643  440198 out.go:298] Setting JSON to false
	I0408 13:06:45.386732  440198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10149,"bootTime":1712571457,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 13:06:45.386802  440198 start.go:139] virtualization: kvm guest
	I0408 13:06:45.389445  440198 out.go:177] * [newest-cni-337169] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 13:06:45.390852  440198 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 13:06:45.392149  440198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 13:06:45.390860  440198 notify.go:220] Checking for updates...
	I0408 13:06:45.393569  440198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 13:06:45.394881  440198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 13:06:45.396141  440198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 13:06:45.397418  440198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 13:06:45.399123  440198 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 13:06:45.399267  440198 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 13:06:45.399395  440198 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 13:06:45.399552  440198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 13:06:45.437364  440198 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 13:06:45.439039  440198 start.go:297] selected driver: kvm2
	I0408 13:06:45.439062  440198 start.go:901] validating driver "kvm2" against <nil>
	I0408 13:06:45.439075  440198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 13:06:45.440113  440198 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 13:06:45.440221  440198 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 13:06:45.456679  440198 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 13:06:45.456752  440198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0408 13:06:45.456817  440198 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0408 13:06:45.457114  440198 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 13:06:45.457201  440198 cni.go:84] Creating CNI manager for ""
	I0408 13:06:45.457219  440198 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 13:06:45.457227  440198 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 13:06:45.457298  440198 start.go:340] cluster config:
	{Name:newest-cni-337169 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-337169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 13:06:45.457409  440198 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 13:06:45.459349  440198 out.go:177] * Starting "newest-cni-337169" primary control-plane node in "newest-cni-337169" cluster
	I0408 13:06:45.460969  440198 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 13:06:45.461054  440198 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0408 13:06:45.461072  440198 cache.go:56] Caching tarball of preloaded images
	I0408 13:06:45.461194  440198 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 13:06:45.461206  440198 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0408 13:06:45.461307  440198 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/newest-cni-337169/config.json ...
	I0408 13:06:45.461326  440198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/newest-cni-337169/config.json: {Name:mkc2464c9d158104ff23fd11bdc4cc22f8d9d782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 13:06:45.461539  440198 start.go:360] acquireMachinesLock for newest-cni-337169: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 13:06:45.461584  440198 start.go:364] duration metric: took 24.474µs to acquireMachinesLock for "newest-cni-337169"
	I0408 13:06:45.461600  440198 start.go:93] Provisioning new machine with config: &{Name:newest-cni-337169 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-337169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 13:06:45.461669  440198 start.go:125] createHost starting for "" (driver="kvm2")
	I0408 13:06:45.465275  440198 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 13:06:45.465494  440198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:06:45.465536  440198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:06:45.481560  440198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0408 13:06:45.482157  440198 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:06:45.482844  440198 main.go:141] libmachine: Using API Version  1
	I0408 13:06:45.482867  440198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:06:45.483322  440198 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:06:45.483621  440198 main.go:141] libmachine: (newest-cni-337169) Calling .GetMachineName
	I0408 13:06:45.483850  440198 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:06:45.484085  440198 start.go:159] libmachine.API.Create for "newest-cni-337169" (driver="kvm2")
	I0408 13:06:45.484124  440198 client.go:168] LocalClient.Create starting
	I0408 13:06:45.484169  440198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem
	I0408 13:06:45.484213  440198 main.go:141] libmachine: Decoding PEM data...
	I0408 13:06:45.484234  440198 main.go:141] libmachine: Parsing certificate...
	I0408 13:06:45.484311  440198 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem
	I0408 13:06:45.484342  440198 main.go:141] libmachine: Decoding PEM data...
	I0408 13:06:45.484356  440198 main.go:141] libmachine: Parsing certificate...
	I0408 13:06:45.484380  440198 main.go:141] libmachine: Running pre-create checks...
	I0408 13:06:45.484395  440198 main.go:141] libmachine: (newest-cni-337169) Calling .PreCreateCheck
	I0408 13:06:45.484874  440198 main.go:141] libmachine: (newest-cni-337169) Calling .GetConfigRaw
	I0408 13:06:45.485776  440198 main.go:141] libmachine: Creating machine...
	I0408 13:06:45.485797  440198 main.go:141] libmachine: (newest-cni-337169) Calling .Create
	I0408 13:06:45.487118  440198 main.go:141] libmachine: (newest-cni-337169) Creating KVM machine...
	I0408 13:06:45.488502  440198 main.go:141] libmachine: (newest-cni-337169) DBG | found existing default KVM network
	I0408 13:06:45.490312  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:45.490127  440221 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfa0}
	I0408 13:06:45.490403  440198 main.go:141] libmachine: (newest-cni-337169) DBG | created network xml: 
	I0408 13:06:45.490429  440198 main.go:141] libmachine: (newest-cni-337169) DBG | <network>
	I0408 13:06:45.490443  440198 main.go:141] libmachine: (newest-cni-337169) DBG |   <name>mk-newest-cni-337169</name>
	I0408 13:06:45.490454  440198 main.go:141] libmachine: (newest-cni-337169) DBG |   <dns enable='no'/>
	I0408 13:06:45.490463  440198 main.go:141] libmachine: (newest-cni-337169) DBG |   
	I0408 13:06:45.490472  440198 main.go:141] libmachine: (newest-cni-337169) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0408 13:06:45.490485  440198 main.go:141] libmachine: (newest-cni-337169) DBG |     <dhcp>
	I0408 13:06:45.490496  440198 main.go:141] libmachine: (newest-cni-337169) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0408 13:06:45.490507  440198 main.go:141] libmachine: (newest-cni-337169) DBG |     </dhcp>
	I0408 13:06:45.490517  440198 main.go:141] libmachine: (newest-cni-337169) DBG |   </ip>
	I0408 13:06:45.490522  440198 main.go:141] libmachine: (newest-cni-337169) DBG |   
	I0408 13:06:45.490529  440198 main.go:141] libmachine: (newest-cni-337169) DBG | </network>
	I0408 13:06:45.490588  440198 main.go:141] libmachine: (newest-cni-337169) DBG | 
	I0408 13:06:45.496400  440198 main.go:141] libmachine: (newest-cni-337169) DBG | trying to create private KVM network mk-newest-cni-337169 192.168.39.0/24...
	I0408 13:06:45.576007  440198 main.go:141] libmachine: (newest-cni-337169) DBG | private KVM network mk-newest-cni-337169 192.168.39.0/24 created
	I0408 13:06:45.576159  440198 main.go:141] libmachine: (newest-cni-337169) Setting up store path in /home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169 ...
	I0408 13:06:45.576189  440198 main.go:141] libmachine: (newest-cni-337169) Building disk image from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 13:06:45.576241  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:45.576098  440221 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 13:06:45.576464  440198 main.go:141] libmachine: (newest-cni-337169) Downloading /home/jenkins/minikube-integration/18588-368424/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso...
	I0408 13:06:45.845661  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:45.845452  440221 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169/id_rsa...
	I0408 13:06:45.944203  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:45.944055  440221 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169/newest-cni-337169.rawdisk...
	I0408 13:06:45.944241  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Writing magic tar header
	I0408 13:06:45.944263  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Writing SSH key tar header
	I0408 13:06:45.944277  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:45.944234  440221 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169 ...
	I0408 13:06:45.944372  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169
	I0408 13:06:45.944406  440198 main.go:141] libmachine: (newest-cni-337169) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169 (perms=drwx------)
	I0408 13:06:45.944418  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube/machines
	I0408 13:06:45.944438  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 13:06:45.944449  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18588-368424
	I0408 13:06:45.944457  440198 main.go:141] libmachine: (newest-cni-337169) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube/machines (perms=drwxr-xr-x)
	I0408 13:06:45.944466  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0408 13:06:45.944481  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home/jenkins
	I0408 13:06:45.944492  440198 main.go:141] libmachine: (newest-cni-337169) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424/.minikube (perms=drwxr-xr-x)
	I0408 13:06:45.944501  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Checking permissions on dir: /home
	I0408 13:06:45.944519  440198 main.go:141] libmachine: (newest-cni-337169) DBG | Skipping /home - not owner
	I0408 13:06:45.944538  440198 main.go:141] libmachine: (newest-cni-337169) Setting executable bit set on /home/jenkins/minikube-integration/18588-368424 (perms=drwxrwxr-x)
	I0408 13:06:45.944548  440198 main.go:141] libmachine: (newest-cni-337169) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0408 13:06:45.944577  440198 main.go:141] libmachine: (newest-cni-337169) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0408 13:06:45.944594  440198 main.go:141] libmachine: (newest-cni-337169) Creating domain...
	I0408 13:06:45.945782  440198 main.go:141] libmachine: (newest-cni-337169) define libvirt domain using xml: 
	I0408 13:06:45.945810  440198 main.go:141] libmachine: (newest-cni-337169) <domain type='kvm'>
	I0408 13:06:45.945822  440198 main.go:141] libmachine: (newest-cni-337169)   <name>newest-cni-337169</name>
	I0408 13:06:45.945831  440198 main.go:141] libmachine: (newest-cni-337169)   <memory unit='MiB'>2200</memory>
	I0408 13:06:45.945840  440198 main.go:141] libmachine: (newest-cni-337169)   <vcpu>2</vcpu>
	I0408 13:06:45.945847  440198 main.go:141] libmachine: (newest-cni-337169)   <features>
	I0408 13:06:45.945855  440198 main.go:141] libmachine: (newest-cni-337169)     <acpi/>
	I0408 13:06:45.945863  440198 main.go:141] libmachine: (newest-cni-337169)     <apic/>
	I0408 13:06:45.945871  440198 main.go:141] libmachine: (newest-cni-337169)     <pae/>
	I0408 13:06:45.945901  440198 main.go:141] libmachine: (newest-cni-337169)     
	I0408 13:06:45.945912  440198 main.go:141] libmachine: (newest-cni-337169)   </features>
	I0408 13:06:45.945924  440198 main.go:141] libmachine: (newest-cni-337169)   <cpu mode='host-passthrough'>
	I0408 13:06:45.945934  440198 main.go:141] libmachine: (newest-cni-337169)   
	I0408 13:06:45.945943  440198 main.go:141] libmachine: (newest-cni-337169)   </cpu>
	I0408 13:06:45.945959  440198 main.go:141] libmachine: (newest-cni-337169)   <os>
	I0408 13:06:45.945972  440198 main.go:141] libmachine: (newest-cni-337169)     <type>hvm</type>
	I0408 13:06:45.945986  440198 main.go:141] libmachine: (newest-cni-337169)     <boot dev='cdrom'/>
	I0408 13:06:45.945995  440198 main.go:141] libmachine: (newest-cni-337169)     <boot dev='hd'/>
	I0408 13:06:45.946003  440198 main.go:141] libmachine: (newest-cni-337169)     <bootmenu enable='no'/>
	I0408 13:06:45.946011  440198 main.go:141] libmachine: (newest-cni-337169)   </os>
	I0408 13:06:45.946021  440198 main.go:141] libmachine: (newest-cni-337169)   <devices>
	I0408 13:06:45.946051  440198 main.go:141] libmachine: (newest-cni-337169)     <disk type='file' device='cdrom'>
	I0408 13:06:45.946091  440198 main.go:141] libmachine: (newest-cni-337169)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169/boot2docker.iso'/>
	I0408 13:06:45.946105  440198 main.go:141] libmachine: (newest-cni-337169)       <target dev='hdc' bus='scsi'/>
	I0408 13:06:45.946121  440198 main.go:141] libmachine: (newest-cni-337169)       <readonly/>
	I0408 13:06:45.946134  440198 main.go:141] libmachine: (newest-cni-337169)     </disk>
	I0408 13:06:45.946145  440198 main.go:141] libmachine: (newest-cni-337169)     <disk type='file' device='disk'>
	I0408 13:06:45.946164  440198 main.go:141] libmachine: (newest-cni-337169)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0408 13:06:45.946187  440198 main.go:141] libmachine: (newest-cni-337169)       <source file='/home/jenkins/minikube-integration/18588-368424/.minikube/machines/newest-cni-337169/newest-cni-337169.rawdisk'/>
	I0408 13:06:45.946201  440198 main.go:141] libmachine: (newest-cni-337169)       <target dev='hda' bus='virtio'/>
	I0408 13:06:45.946208  440198 main.go:141] libmachine: (newest-cni-337169)     </disk>
	I0408 13:06:45.946220  440198 main.go:141] libmachine: (newest-cni-337169)     <interface type='network'>
	I0408 13:06:45.946232  440198 main.go:141] libmachine: (newest-cni-337169)       <source network='mk-newest-cni-337169'/>
	I0408 13:06:45.946242  440198 main.go:141] libmachine: (newest-cni-337169)       <model type='virtio'/>
	I0408 13:06:45.946257  440198 main.go:141] libmachine: (newest-cni-337169)     </interface>
	I0408 13:06:45.946270  440198 main.go:141] libmachine: (newest-cni-337169)     <interface type='network'>
	I0408 13:06:45.946281  440198 main.go:141] libmachine: (newest-cni-337169)       <source network='default'/>
	I0408 13:06:45.946290  440198 main.go:141] libmachine: (newest-cni-337169)       <model type='virtio'/>
	I0408 13:06:45.946300  440198 main.go:141] libmachine: (newest-cni-337169)     </interface>
	I0408 13:06:45.946309  440198 main.go:141] libmachine: (newest-cni-337169)     <serial type='pty'>
	I0408 13:06:45.946319  440198 main.go:141] libmachine: (newest-cni-337169)       <target port='0'/>
	I0408 13:06:45.946349  440198 main.go:141] libmachine: (newest-cni-337169)     </serial>
	I0408 13:06:45.946372  440198 main.go:141] libmachine: (newest-cni-337169)     <console type='pty'>
	I0408 13:06:45.946384  440198 main.go:141] libmachine: (newest-cni-337169)       <target type='serial' port='0'/>
	I0408 13:06:45.946395  440198 main.go:141] libmachine: (newest-cni-337169)     </console>
	I0408 13:06:45.946408  440198 main.go:141] libmachine: (newest-cni-337169)     <rng model='virtio'>
	I0408 13:06:45.946417  440198 main.go:141] libmachine: (newest-cni-337169)       <backend model='random'>/dev/random</backend>
	I0408 13:06:45.946429  440198 main.go:141] libmachine: (newest-cni-337169)     </rng>
	I0408 13:06:45.946439  440198 main.go:141] libmachine: (newest-cni-337169)     
	I0408 13:06:45.946447  440198 main.go:141] libmachine: (newest-cni-337169)     
	I0408 13:06:45.946457  440198 main.go:141] libmachine: (newest-cni-337169)   </devices>
	I0408 13:06:45.946466  440198 main.go:141] libmachine: (newest-cni-337169) </domain>
	I0408 13:06:45.946476  440198 main.go:141] libmachine: (newest-cni-337169) 
	I0408 13:06:45.950785  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:03:11:aa in network default
	I0408 13:06:45.951489  440198 main.go:141] libmachine: (newest-cni-337169) Ensuring networks are active...
	I0408 13:06:45.951509  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:45.952306  440198 main.go:141] libmachine: (newest-cni-337169) Ensuring network default is active
	I0408 13:06:45.952566  440198 main.go:141] libmachine: (newest-cni-337169) Ensuring network mk-newest-cni-337169 is active
	I0408 13:06:45.953045  440198 main.go:141] libmachine: (newest-cni-337169) Getting domain xml...
	I0408 13:06:45.953787  440198 main.go:141] libmachine: (newest-cni-337169) Creating domain...
	I0408 13:06:47.233781  440198 main.go:141] libmachine: (newest-cni-337169) Waiting to get IP...
	I0408 13:06:47.234662  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:47.235154  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:47.235197  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:47.235130  440221 retry.go:31] will retry after 203.829297ms: waiting for machine to come up
	I0408 13:06:47.440915  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:47.441490  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:47.441520  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:47.441426  440221 retry.go:31] will retry after 257.232005ms: waiting for machine to come up
	I0408 13:06:47.699938  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:47.700366  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:47.700394  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:47.700324  440221 retry.go:31] will retry after 306.172564ms: waiting for machine to come up
	I0408 13:06:48.009498  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:48.010065  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:48.010116  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:48.009992  440221 retry.go:31] will retry after 582.218045ms: waiting for machine to come up
	I0408 13:06:48.593946  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:48.594419  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:48.594470  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:48.594357  440221 retry.go:31] will retry after 533.108922ms: waiting for machine to come up
	I0408 13:06:49.129120  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:49.129646  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:49.129678  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:49.129612  440221 retry.go:31] will retry after 724.395828ms: waiting for machine to come up
	I0408 13:06:49.855849  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:49.856283  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:49.856338  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:49.856243  440221 retry.go:31] will retry after 987.302935ms: waiting for machine to come up
	I0408 13:06:50.844698  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:50.845229  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:50.845260  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:50.845182  440221 retry.go:31] will retry after 1.116130534s: waiting for machine to come up
	I0408 13:06:51.963720  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:51.964240  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:51.964271  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:51.964202  440221 retry.go:31] will retry after 1.636539871s: waiting for machine to come up
	I0408 13:06:53.603304  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:53.603825  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:53.603853  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:53.603783  440221 retry.go:31] will retry after 2.03915333s: waiting for machine to come up
	I0408 13:06:55.644487  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:55.645020  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:55.645052  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:55.644971  440221 retry.go:31] will retry after 2.116933983s: waiting for machine to come up
	I0408 13:06:57.763343  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:06:57.763934  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:06:57.763961  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:06:57.763868  440221 retry.go:31] will retry after 2.293501644s: waiting for machine to come up
	I0408 13:07:00.060489  440198 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:00.061092  440198 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:00.061126  440198 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:00.061021  440221 retry.go:31] will retry after 3.187962529s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.796044691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581623796025574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9617737e-518e-4712-ac8c-b32e86aab692 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.796579479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89d8bb73-f55b-4e66-9bdb-491a6e0df058 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.796631602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89d8bb73-f55b-4e66-9bdb-491a6e0df058 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.796821277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89d8bb73-f55b-4e66-9bdb-491a6e0df058 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.837611881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce64269a-825e-4d20-9d82-f27a92530678 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.837689004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce64269a-825e-4d20-9d82-f27a92530678 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.838938823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=424c87b5-124c-4d51-9580-6616dcd030bf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.839765146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581623839737206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=424c87b5-124c-4d51-9580-6616dcd030bf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.840277084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=491722c7-d29e-4fe6-9858-f3e1386d8a1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.840332505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=491722c7-d29e-4fe6-9858-f3e1386d8a1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.840692689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=491722c7-d29e-4fe6-9858-f3e1386d8a1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.890180084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dc02b57-e2e5-4218-9950-161eb62931d7 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.890293716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dc02b57-e2e5-4218-9950-161eb62931d7 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.891944947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd647329-005e-442f-af8d-6b9a7f7ca2f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.892728970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581623892694642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd647329-005e-442f-af8d-6b9a7f7ca2f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.894004688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91d2c191-6358-4ccb-b724-0dcebe53cabd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.894076067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91d2c191-6358-4ccb-b724-0dcebe53cabd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.894357145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91d2c191-6358-4ccb-b724-0dcebe53cabd name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.943440087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a55fd7b-f0d3-4e8e-ad7d-b3c7cbf3b98e name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.943626584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a55fd7b-f0d3-4e8e-ad7d-b3c7cbf3b98e name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.945593362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4613ea7a-d182-4f70-aac2-953f4939a8b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.946361969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581623946328890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4613ea7a-d182-4f70-aac2-953f4939a8b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.947225879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96b542d3-1878-4699-8006-3f6c6aae7c2c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.947295180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96b542d3-1878-4699-8006-3f6c6aae7c2c name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:03 no-preload-135234 crio[730]: time="2024-04-08 13:07:03.947855037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580477814100319,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da5333,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e826602fffd409128f6a97065c7d28177660298974ea9cd3e6bfc10952e4f3d3,PodSandboxId:94d8b3e4518d08f826fe38666580b585948433dd810d028d473f913e2fed1cf8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1712580456679534703,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e34c664b-3926-4ddf-98b9-7bb599eee6ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6b55d7cb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346,PodSandboxId:61a5b22049c08e70bdfc8c27ddcc459ccd5119e5323d23bc9258078b46aff128,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580453563241083,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ndz4x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f33b7eb7-3553-4027-ac38-f3ee62cc67d5,},Annotations:map[string]string{io.kubernetes.container.hash: 56e8a5c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568,PodSandboxId:aea6c90dcd38188697e3955b3474d9f6318517b96193faef69d32c73fde723f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1712580447275067444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tr6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e97a709-efb2-4d44-8f
2e-b9e9fef5fb70,},Annotations:map[string]string{io.kubernetes.container.hash: 332c3d62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b,PodSandboxId:1d0fa77fa4e847424a883d491742ac095d655e6bee3e3756295fc44af67661a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1712580447268326827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64374707-2bed-4656-a07a-38e950da53
33,},Annotations:map[string]string{io.kubernetes.container.hash: 9e9568f4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f,PodSandboxId:6341883da4bbb189a0f13573fcc46480a906360bda1aabd3933f7ea800a8b42b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580441370244197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2c4c6e50402f450b51be61653856d6,},Annotations:map[string]string{io.kuber
netes.container.hash: 24fc493b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d,PodSandboxId:4e5300388929c29837fc65d48cd6d903ac2ed08b66c92343f87191a9e17a58e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712580441342093836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a504075cb3a2add1c3f5ae973fcfff9,},Annotations:map[string]string{io.kubernetes.contain
er.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7,PodSandboxId:75e6c81293865226dd0c11316f8f5a699261cf6bf66cd7589cadfa347cfbf79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712580441262115634,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5336d68b869284c41908908fd176a37,},Annotations:map[string]string{io.kube
rnetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb,PodSandboxId:d2586afd37420483abaea29af9686d3c65dcadb3f304b72ced758bae0bd7d302,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712580441192395127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-135234,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8eba9bb52edf68b218a17bfc407e5c8,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 85ce6568,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96b542d3-1878-4699-8006-3f6c6aae7c2c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a6c1545f860a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   1d0fa77fa4e84       storage-provisioner
	e826602fffd40       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   94d8b3e4518d0       busybox
	eef06839046da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   61a5b22049c08       coredns-7db6d8ff4d-ndz4x
	9afab6e492932       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652                                      19 minutes ago      Running             kube-proxy                1                   aea6c90dcd381       kube-proxy-tr6td
	78ee8679f8367       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   1d0fa77fa4e84       storage-provisioner
	31df11caa819e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   6341883da4bbb       etcd-no-preload-135234
	bb1c9d0aa3889       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5                                      19 minutes ago      Running             kube-scheduler            1                   4e5300388929c       kube-scheduler-no-preload-135234
	76a18493a630c       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a                                      19 minutes ago      Running             kube-controller-manager   1                   75e6c81293865       kube-controller-manager-no-preload-135234
	380c451b3806e       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3                                      19 minutes ago      Running             kube-apiserver            1                   d2586afd37420       kube-apiserver-no-preload-135234
	
	
	==> coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43428 - 8500 "HINFO IN 7617088099657041315.5867437557060873632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009418175s
	
	
	==> describe nodes <==
	Name:               no-preload-135234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-135234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=no-preload-135234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_38_57_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:38:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-135234
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 13:07:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 13:03:16 +0000   Mon, 08 Apr 2024 12:38:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 13:03:16 +0000   Mon, 08 Apr 2024 12:38:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 13:03:16 +0000   Mon, 08 Apr 2024 12:38:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 13:03:16 +0000   Mon, 08 Apr 2024 12:47:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.48
	  Hostname:    no-preload-135234
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 18c48622337b460382a3ee7ec0672944
	  System UUID:                18c48622-337b-4603-82a3-ee7ec0672944
	  Boot ID:                    a3cfe8ba-a14f-4e41-9b54-35c60f5a9546
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-ndz4x                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-no-preload-135234                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-135234             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-135234    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-tr6td                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-no-preload-135234             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-dbb9b              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-135234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-135234 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-135234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-135234 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-135234 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node no-preload-135234 event: Registered Node no-preload-135234 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-135234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-135234 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-135234 event: Registered Node no-preload-135234 in Controller
	
	
	==> dmesg <==
	[Apr 8 12:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052896] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041171] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.563193] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.872146] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.636415] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.920656] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.056041] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070444] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.183986] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.182803] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.317385] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[Apr 8 12:47] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.068130] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.341938] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +6.588358] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.090456] systemd-fstab-generator[2085]: Ignoring "noauto" option for root device
	[  +2.500377] kauditd_printk_skb: 72 callbacks suppressed
	[  +7.352202] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] <==
	{"level":"info","ts":"2024-04-08T12:47:48.186338Z","caller":"traceutil/trace.go:171","msg":"trace[574962701] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"274.073911ms","start":"2024-04-08T12:47:47.912247Z","end":"2024-04-08T12:47:48.186321Z","steps":["trace[574962701] 'process raft request'  (duration: 103.363321ms)","trace[574962701] 'compare'  (duration: 170.384599ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T12:47:48.186548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.543969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-08T12:47:48.186614Z","caller":"traceutil/trace.go:171","msg":"trace[207258449] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b; range_end:; response_count:1; response_revision:600; }","duration":"272.646041ms","start":"2024-04-08T12:47:47.913959Z","end":"2024-04-08T12:47:48.186605Z","steps":["trace[207258449] 'agreement among raft nodes before linearized reading'  (duration: 272.404723ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:47:48.550963Z","caller":"traceutil/trace.go:171","msg":"trace[487914323] linearizableReadLoop","detail":"{readStateIndex:639; appliedIndex:638; }","duration":"345.277115ms","start":"2024-04-08T12:47:48.205665Z","end":"2024-04-08T12:47:48.550942Z","steps":["trace[487914323] 'read index received'  (duration: 344.102476ms)","trace[487914323] 'applied index is now lower than readState.Index'  (duration: 1.173672ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-08T12:47:48.551217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.532574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-08T12:47:48.55125Z","caller":"traceutil/trace.go:171","msg":"trace[1873912533] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b; range_end:; response_count:1; response_revision:601; }","duration":"345.611692ms","start":"2024-04-08T12:47:48.20563Z","end":"2024-04-08T12:47:48.551241Z","steps":["trace[1873912533] 'agreement among raft nodes before linearized reading'  (duration: 345.386261ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:47:48.551276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:47:48.205613Z","time spent":"345.656653ms","remote":"127.0.0.1:52610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4259,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" "}
	{"level":"info","ts":"2024-04-08T12:47:48.551537Z","caller":"traceutil/trace.go:171","msg":"trace[1947281467] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"357.294412ms","start":"2024-04-08T12:47:48.194158Z","end":"2024-04-08T12:47:48.551452Z","steps":["trace[1947281467] 'process raft request'  (duration: 355.648721ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:47:48.551624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:47:48.194144Z","time spent":"357.423025ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":763,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1e9df\" mod_revision:563 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1e9df\" value_size:668 lease:2438293193884155420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1e9df\" > >"}
	{"level":"info","ts":"2024-04-08T12:48:11.871542Z","caller":"traceutil/trace.go:171","msg":"trace[1636821799] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"382.321211ms","start":"2024-04-08T12:48:11.489125Z","end":"2024-04-08T12:48:11.871446Z","steps":["trace[1636821799] 'process raft request'  (duration: 382.15699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:48:11.871817Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:48:11.48904Z","time spent":"382.645424ms","remote":"127.0.0.1:52516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":802,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" mod_revision:597 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" value_size:707 lease:2438293193884155420 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b04e00c3\" > >"}
	{"level":"info","ts":"2024-04-08T12:48:12.166762Z","caller":"traceutil/trace.go:171","msg":"trace[1668074674] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"462.45923ms","start":"2024-04-08T12:48:11.704283Z","end":"2024-04-08T12:48:12.166742Z","steps":["trace[1668074674] 'read index received'  (duration: 167.250498ms)","trace[1668074674] 'applied index is now lower than readState.Index'  (duration: 295.207869ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-08T12:48:12.167252Z","caller":"traceutil/trace.go:171","msg":"trace[763150810] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"657.422092ms","start":"2024-04-08T12:48:11.509819Z","end":"2024-04-08T12:48:12.167241Z","steps":["trace[763150810] 'process raft request'  (duration: 656.810147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:48:12.167389Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:48:11.509801Z","time spent":"657.533808ms","remote":"127.0.0.1:52610","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4221,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" mod_revision:604 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" value_size:4155 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" > >"}
	{"level":"warn","ts":"2024-04-08T12:48:12.167679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"463.342003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" ","response":"range_response_count:1 size:4236"}
	{"level":"info","ts":"2024-04-08T12:48:12.16778Z","caller":"traceutil/trace.go:171","msg":"trace[1515833610] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b; range_end:; response_count:1; response_revision:619; }","duration":"463.500669ms","start":"2024-04-08T12:48:11.704257Z","end":"2024-04-08T12:48:12.167757Z","steps":["trace[1515833610] 'agreement among raft nodes before linearized reading'  (duration: 463.321717ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-08T12:48:12.167856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-08T12:48:11.704244Z","time spent":"463.601633ms","remote":"127.0.0.1:52610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4259,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-dbb9b\" "}
	{"level":"warn","ts":"2024-04-08T12:48:12.168024Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.452638ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b\" ","response":"range_response_count:1 size:940"}
	{"level":"info","ts":"2024-04-08T12:48:12.16807Z","caller":"traceutil/trace.go:171","msg":"trace[1226717369] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-dbb9b.17c44ed8b1b1844b; range_end:; response_count:1; response_revision:619; }","duration":"292.519135ms","start":"2024-04-08T12:48:11.875543Z","end":"2024-04-08T12:48:12.168062Z","steps":["trace[1226717369] 'agreement among raft nodes before linearized reading'  (duration: 292.40909ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T12:57:23.88779Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":837}
	{"level":"info","ts":"2024-04-08T12:57:23.900188Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":837,"took":"11.93959ms","hash":589023636,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2637824,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-08T12:57:23.90027Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":589023636,"revision":837,"compact-revision":-1}
	{"level":"info","ts":"2024-04-08T13:02:23.89569Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1080}
	{"level":"info","ts":"2024-04-08T13:02:23.900143Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1080,"took":"4.06948ms","hash":454178957,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-08T13:02:23.900203Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":454178957,"revision":1080,"compact-revision":837}
	
	
	==> kernel <==
	 13:07:04 up 20 min,  0 users,  load average: 0.11, 0.11, 0.09
	Linux no-preload-135234 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] <==
	I0408 13:00:26.489515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:02:25.488818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:02:25.488963       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0408 13:02:26.490198       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:02:26.490276       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:02:26.490293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:02:26.490208       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:02:26.490413       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:02:26.491699       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:03:26.490816       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:03:26.490879       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:03:26.490888       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:03:26.492087       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:03:26.492133       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:03:26.492140       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:05:26.492061       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:05:26.492411       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:05:26.492441       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:05:26.492643       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:05:26.492864       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:05:26.494290       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] <==
	I0408 13:01:11.494905       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:01:40.952078       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:01:41.501996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:02:10.958283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:02:11.511800       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:02:40.964448       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:02:41.519973       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:03:10.970449       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:03:11.528274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 13:03:38.507584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.669645ms"
	E0408 13:03:40.975806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:03:41.538395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 13:03:53.502949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="134.986µs"
	E0408 13:04:10.981935       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:04:11.550899       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:04:40.988909       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:04:41.560261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:05:10.994662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:05:11.568997       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:05:41.001147       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:05:41.578754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:06:11.008437       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:06:11.587867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:06:41.014527       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:06:41.601078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] <==
	I0408 12:47:27.737118       1 server_linux.go:69] "Using iptables proxy"
	I0408 12:47:27.758016       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.48"]
	I0408 12:47:27.838606       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0408 12:47:27.838774       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:47:27.838886       1 server_linux.go:165] "Using iptables Proxier"
	I0408 12:47:27.848291       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:47:27.849536       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0408 12:47:27.849696       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:47:27.853241       1 config.go:319] "Starting node config controller"
	I0408 12:47:27.853378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 12:47:27.861375       1 config.go:192] "Starting service config controller"
	I0408 12:47:27.861421       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 12:47:27.861452       1 config.go:101] "Starting endpoint slice config controller"
	I0408 12:47:27.861456       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 12:47:27.954441       1 shared_informer.go:320] Caches are synced for node config
	I0408 12:47:27.961649       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0408 12:47:27.961718       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] <==
	I0408 12:47:23.277776       1 serving.go:380] Generated self-signed cert in-memory
	W0408 12:47:25.375369       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0408 12:47:25.375597       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:47:25.375729       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0408 12:47:25.375871       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0408 12:47:25.460589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0408 12:47:25.461069       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:47:25.464099       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0408 12:47:25.464451       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0408 12:47:25.465727       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 12:47:25.465837       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0408 12:47:25.566313       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 13:04:20 no-preload-135234 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:04:20 no-preload-135234 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:04:20 no-preload-135234 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:04:32 no-preload-135234 kubelet[1360]: E0408 13:04:32.482988    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:04:47 no-preload-135234 kubelet[1360]: E0408 13:04:47.481920    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:05:02 no-preload-135234 kubelet[1360]: E0408 13:05:02.484849    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:05:17 no-preload-135234 kubelet[1360]: E0408 13:05:17.481976    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:05:20 no-preload-135234 kubelet[1360]: E0408 13:05:20.500728    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 13:05:20 no-preload-135234 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:05:20 no-preload-135234 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:05:20 no-preload-135234 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:05:20 no-preload-135234 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:05:30 no-preload-135234 kubelet[1360]: E0408 13:05:30.482288    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:05:44 no-preload-135234 kubelet[1360]: E0408 13:05:44.482387    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:05:56 no-preload-135234 kubelet[1360]: E0408 13:05:56.482170    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:06:10 no-preload-135234 kubelet[1360]: E0408 13:06:10.482280    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:06:20 no-preload-135234 kubelet[1360]: E0408 13:06:20.504805    1360 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 13:06:20 no-preload-135234 kubelet[1360]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:06:20 no-preload-135234 kubelet[1360]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:06:20 no-preload-135234 kubelet[1360]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:06:20 no-preload-135234 kubelet[1360]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:06:22 no-preload-135234 kubelet[1360]: E0408 13:06:22.481413    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:06:36 no-preload-135234 kubelet[1360]: E0408 13:06:36.483370    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:06:47 no-preload-135234 kubelet[1360]: E0408 13:06:47.482172    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	Apr 08 13:06:58 no-preload-135234 kubelet[1360]: E0408 13:06:58.482189    1360 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-dbb9b" podUID="f435d865-85f3-4d32-bedf-c3bf053500fe"
	
	
	==> storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] <==
	I0408 12:47:27.705107       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0408 12:47:57.708981       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] <==
	I0408 12:47:57.914155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:47:57.928204       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:47:57.928286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:48:15.342323       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:48:15.342667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-135234_4aceb192-f162-4ff9-bd07-33e095ec5491!
	I0408 12:48:15.344435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e66177d-ed01-459b-b2fc-842fa98cd685", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-135234_4aceb192-f162-4ff9-bd07-33e095ec5491 became leader
	I0408 12:48:15.465679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-135234_4aceb192-f162-4ff9-bd07-33e095ec5491!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-135234 -n no-preload-135234
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-135234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-dbb9b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-135234 describe pod metrics-server-569cc877fc-dbb9b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-135234 describe pod metrics-server-569cc877fc-dbb9b: exit status 1 (68.551511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-dbb9b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-135234 describe pod metrics-server-569cc877fc-dbb9b: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (366.74s)
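For reference, the post-mortem above reduces to two kubectl calls that can be replayed by hand against the same profile. A minimal sketch, assuming the cluster still exists and reusing the context and pod name taken from the log; the explicit -n kube-system flag is an addition here (the helper's namespace-less describe may be why it reported NotFound even though the pod was listed as non-running):

    # list pods that are not in the Running phase, across all namespaces
    kubectl --context no-preload-135234 get po -A --field-selector=status.phase!=Running \
        -o=jsonpath='{.items[*].metadata.name}'
    # describe the flagged metrics-server pod in its actual namespace
    kubectl --context no-preload-135234 -n kube-system describe pod metrics-server-569cc877fc-dbb9b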

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (360.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-488947 -n embed-certs-488947
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-08 13:07:51.762014365 +0000 UTC m=+6449.548696569
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-488947 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-488947 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.882µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-488947 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
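The assertion this test makes can be approximated by hand. A rough equivalent, assuming the same context and the kubernetes-dashboard namespace named in the log (both steps would fail in this run, since the dashboard pods never became ready and the scraper deployment could not be described):

    # wait up to 9m for the dashboard pods the test selects on
    kubectl --context embed-certs-488947 -n kubernetes-dashboard wait pod \
        -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s
    # the test then expects this image to contain registry.k8s.io/echoserver:1.4
    kubectl --context embed-certs-488947 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
        -o jsonpath='{.spec.template.spec.containers[0].image}'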
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-488947 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-488947 logs -n 25: (1.478296702s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:06 UTC | 08 Apr 24 13:06 UTC |
	| start   | -p newest-cni-337169 --memory=2200 --alsologtostderr   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:06 UTC | 08 Apr 24 13:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	| addons  | enable metrics-server -p newest-cni-337169             | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-337169                                   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-337169                  | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-337169 --memory=2200 --alsologtostderr   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 13:07:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 13:07:51.830001  440985 out.go:291] Setting OutFile to fd 1 ...
	I0408 13:07:51.830155  440985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 13:07:51.830171  440985 out.go:304] Setting ErrFile to fd 2...
	I0408 13:07:51.830180  440985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 13:07:51.830369  440985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 13:07:51.831009  440985 out.go:298] Setting JSON to false
	I0408 13:07:51.831972  440985 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10215,"bootTime":1712571457,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 13:07:51.832047  440985 start.go:139] virtualization: kvm guest
	I0408 13:07:51.834904  440985 out.go:177] * [newest-cni-337169] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 13:07:51.837427  440985 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 13:07:51.837421  440985 notify.go:220] Checking for updates...
	I0408 13:07:51.839240  440985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 13:07:51.840775  440985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 13:07:51.842366  440985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 13:07:51.844120  440985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 13:07:51.846004  440985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 13:07:51.848707  440985 config.go:182] Loaded profile config "newest-cni-337169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 13:07:51.849181  440985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:07:51.849233  440985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:07:51.867044  440985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I0408 13:07:51.867566  440985 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:07:51.868207  440985 main.go:141] libmachine: Using API Version  1
	I0408 13:07:51.868229  440985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:07:51.868666  440985 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:07:51.868886  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:07:51.869212  440985 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 13:07:51.869641  440985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:07:51.869685  440985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:07:51.885764  440985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0408 13:07:51.886340  440985 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:07:51.886941  440985 main.go:141] libmachine: Using API Version  1
	I0408 13:07:51.886978  440985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:07:51.887370  440985 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:07:51.887598  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:07:51.930616  440985 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 13:07:51.932032  440985 start.go:297] selected driver: kvm2
	I0408 13:07:51.932068  440985 start.go:901] validating driver "kvm2" against &{Name:newest-cni-337169 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.0 ClusterName:newest-cni-337169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 13:07:51.932218  440985 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 13:07:51.932934  440985 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 13:07:51.933026  440985 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 13:07:51.949790  440985 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 13:07:51.950254  440985 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 13:07:51.950341  440985 cni.go:84] Creating CNI manager for ""
	I0408 13:07:51.950359  440985 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 13:07:51.950410  440985 start.go:340] cluster config:
	{Name:newest-cni-337169 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-337169 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 13:07:51.950592  440985 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 13:07:51.952864  440985 out.go:177] * Starting "newest-cni-337169" primary control-plane node in "newest-cni-337169" cluster
	I0408 13:07:51.954660  440985 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 13:07:51.954714  440985 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0408 13:07:51.954723  440985 cache.go:56] Caching tarball of preloaded images
	I0408 13:07:51.954853  440985 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 13:07:51.954870  440985 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0408 13:07:51.954982  440985 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/newest-cni-337169/config.json ...
	I0408 13:07:51.955187  440985 start.go:360] acquireMachinesLock for newest-cni-337169: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 13:07:51.955233  440985 start.go:364] duration metric: took 24.445µs to acquireMachinesLock for "newest-cni-337169"
	I0408 13:07:51.955247  440985 start.go:96] Skipping create...Using existing machine configuration
	I0408 13:07:51.955255  440985 fix.go:54] fixHost starting: 
	I0408 13:07:51.955506  440985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:07:51.955544  440985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:07:51.971792  440985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0408 13:07:51.972292  440985 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:07:51.972810  440985 main.go:141] libmachine: Using API Version  1
	I0408 13:07:51.972825  440985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:07:51.973198  440985 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:07:51.973461  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:07:51.973710  440985 main.go:141] libmachine: (newest-cni-337169) Calling .GetState
	I0408 13:07:51.976080  440985 fix.go:112] recreateIfNeeded on newest-cni-337169: state=Stopped err=<nil>
	I0408 13:07:51.976119  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	W0408 13:07:51.976565  440985 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 13:07:51.979841  440985 out.go:177] * Restarting existing kvm2 VM for "newest-cni-337169" ...
	
	
	==> CRI-O <==
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.498984675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581672498959545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=955d5704-8a0b-41f5-82e9-fe5bd287edfb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.500042542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be4081e4-2db5-4777-9b61-8b4afb89d1e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.500102687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be4081e4-2db5-4777-9b61-8b4afb89d1e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.500913783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be4081e4-2db5-4777-9b61-8b4afb89d1e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.551089439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dda53a3f-d391-41e4-b7df-6dd57a85a8e6 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.551224830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dda53a3f-d391-41e4-b7df-6dd57a85a8e6 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.552661391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80f33da3-61c4-44aa-83aa-d441150909e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.553208382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581672553133783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80f33da3-61c4-44aa-83aa-d441150909e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.553891834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f204f68-1fcb-4148-bf0f-91393ddbc892 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.553949982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f204f68-1fcb-4148-bf0f-91393ddbc892 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.554378498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f204f68-1fcb-4148-bf0f-91393ddbc892 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.610757635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01583a07-9dee-407b-a50e-7c4c0dd59e41 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.610863196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01583a07-9dee-407b-a50e-7c4c0dd59e41 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.612835974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3c04cd1-8ec7-43ea-b4d5-1e503b729ad8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.613484778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581672613454766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3c04cd1-8ec7-43ea-b4d5-1e503b729ad8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.614358517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58db881c-a1e2-45ac-b22c-f8b28c258c60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.614413995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58db881c-a1e2-45ac-b22c-f8b28c258c60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.614883175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58db881c-a1e2-45ac-b22c-f8b28c258c60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.662601612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=195db79a-8904-4ef8-bb5b-cdc558db3c19 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.662683797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=195db79a-8904-4ef8-bb5b-cdc558db3c19 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.664519140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d7e0d3c-3a72-4437-a1d4-5273edc1ae98 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.664946456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581672664912493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d7e0d3c-3a72-4437-a1d4-5273edc1ae98 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.665661441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29c5465f-3092-4c73-92fc-670e931757ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.665719360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29c5465f-3092-4c73-92fc-670e931757ee name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:07:52 embed-certs-488947 crio[717]: time="2024-04-08 13:07:52.666043316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0,PodSandboxId:b15e4d07304b4ab903c8e8046c5334786d183a051741b610422659fe3ade41ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580766872721378,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ae5294d-2336-46b7-b2e8-25d6664d2c62,},Annotations:map[string]string{io.kubernetes.container.hash: da5bb5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3,PodSandboxId:07173ed88181af2921ca044b685f162dd8bd66a416411f93d3b84d60df4fb128,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765526826057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-r5rxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8b96604-1b62-462c-94b9-91d009b7f20e,},Annotations:map[string]string{io.kubernetes.container.hash: 496baa91,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68,PodSandboxId:225aaee73a5128e7316d30ec259fca4f6c3763225d6b532d5ac85103cb38785d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580765403578684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gdp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
6d8a54f-673e-495d-a0f7-fb03ff7b447b,},Annotations:map[string]string{io.kubernetes.container.hash: f3a90b0c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c,PodSandboxId:c4f108b541515a478a4073062cf836755266e7542de935587a21c7d11537e66e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt
:1712580765123796808,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mqrtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1035043f-eea0-4b45-a2df-18d477a54ae9,},Annotations:map[string]string{io.kubernetes.container.hash: 7842f57d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893,PodSandboxId:af7efdb70021a9cc5369014b9f7f1e31d6858e9b4fa46e651e086a669a1f9d47,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580744517071198,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10a373ca7d5307749e7ac8e52c7d9187,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9,PodSandboxId:1b3dadf9339e0633362ace13438a71aa62df63ec3bb1cfa3c633843bd2f9ec3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580744512776628,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b31daee4d6a3afaf7bb8490632992b25,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75,PodSandboxId:6f258073dacfbac185c0771e1fe3611c59b45cc420aa58ee7713db8e034d1da8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580744440226096,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98210bf73b7884f23baa7499ebf47a51,},Annotations:map[string]string{io.kubernetes.container.hash: db6b0384,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466,PodSandboxId:7accfcdd7ffc68e874bbe46f61ef7397fc4b89e061dac0f470f6dfd3e11c7941,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580744397796719,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-488947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ec65011bfa809c442830b77b914e27,},Annotations:map[string]string{io.kubernetes.container.hash: 39fb96b1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29c5465f-3092-4c73-92fc-670e931757ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	152b429d9251c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   b15e4d07304b4       storage-provisioner
	b6e7739783a4f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   07173ed88181a       coredns-76f75df574-r5rxq
	a108c90e411d1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   225aaee73a512       coredns-76f75df574-4gdp4
	4bc85eb1a4d2d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   15 minutes ago      Running             kube-proxy                0                   c4f108b541515       kube-proxy-mqrtp
	7a1d769685f62       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   15 minutes ago      Running             kube-scheduler            2                   af7efdb70021a       kube-scheduler-embed-certs-488947
	5e6f9ed437945       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   15 minutes ago      Running             kube-controller-manager   2                   1b3dadf9339e0       kube-controller-manager-embed-certs-488947
	3e8346ea478a6       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   15 minutes ago      Running             kube-apiserver            2                   6f258073dacfb       kube-apiserver-embed-certs-488947
	7b845af8e0eaa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   7accfcdd7ffc6       etcd-embed-certs-488947
	
	
	==> coredns [a108c90e411d17597b188b89f7150f598fd909618e94c4edb06436c70e882b68] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b6e7739783a4fdd53944ad28922f341ef71563567d030a7f9dc3a83c9efc52d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-488947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-488947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=embed-certs-488947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:52:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-488947
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 13:07:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 13:03:01 +0000   Mon, 08 Apr 2024 12:52:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 13:03:01 +0000   Mon, 08 Apr 2024 12:52:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 13:03:01 +0000   Mon, 08 Apr 2024 12:52:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 13:03:01 +0000   Mon, 08 Apr 2024 12:52:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.159
	  Hostname:    embed-certs-488947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 99e3652f67bd4ae8b7c4adf9bc2dc24b
	  System UUID:                99e3652f-67bd-4ae8-b7c4-adf9bc2dc24b
	  Boot ID:                    d547bf90-e1f6-45ad-9f8e-66de3ca49156
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4gdp4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-76f75df574-r5rxq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-488947                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-488947             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-488947    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-mqrtp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-488947             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-87ddx               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-488947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-488947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-488947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-488947 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-488947 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-488947 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-488947 event: Registered Node embed-certs-488947 in Controller
	
	
	==> dmesg <==
	[  +0.052977] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041568] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.650129] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.986381] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.661754] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.402355] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.067673] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068964] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.226490] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.142214] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.365713] systemd-fstab-generator[701]: Ignoring "noauto" option for root device
	[  +5.207459] systemd-fstab-generator[798]: Ignoring "noauto" option for root device
	[  +0.065528] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.526611] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.624343] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.339915] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 8 12:52] kauditd_printk_skb: 6 callbacks suppressed
	[  +2.000954] systemd-fstab-generator[3569]: Ignoring "noauto" option for root device
	[  +6.786799] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.069831] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[ +13.789054] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.230625] systemd-fstab-generator[4209]: Ignoring "noauto" option for root device
	[Apr 8 12:53] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [7b845af8e0eaafa8d1d0272965976b72d49fa2d253688de210980416b0748466] <==
	{"level":"info","ts":"2024-04-08T12:52:25.094239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T12:52:25.094296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T12:52:25.094332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgPreVoteResp from d718283c8ba9c288 at term 1"}
	{"level":"info","ts":"2024-04-08T12:52:25.094344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.09435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 received MsgVoteResp from d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.094358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d718283c8ba9c288 became leader at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.094368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d718283c8ba9c288 elected leader d718283c8ba9c288 at term 2"}
	{"level":"info","ts":"2024-04-08T12:52:25.098653Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.103597Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d718283c8ba9c288","local-member-attributes":"{Name:embed-certs-488947 ClientURLs:[https://192.168.72.159:2379]}","request-path":"/0/members/d718283c8ba9c288/attributes","cluster-id":"6f0e35e647fe17a2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:52:25.103833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:52:25.104412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:52:25.110977Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T12:52:25.10645Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0e35e647fe17a2","local-member-id":"d718283c8ba9c288","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.111217Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.111266Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:52:25.118969Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.159:2379"}
	{"level":"info","ts":"2024-04-08T12:52:25.119431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:52:25.119474Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T13:02:25.525539Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":712}
	{"level":"info","ts":"2024-04-08T13:02:25.537869Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":712,"took":"11.962504ms","hash":3674875514,"current-db-size-bytes":2310144,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2310144,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-08T13:02:25.537957Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3674875514,"revision":712,"compact-revision":-1}
	{"level":"info","ts":"2024-04-08T13:07:18.98264Z","caller":"traceutil/trace.go:171","msg":"trace[133135818] transaction","detail":"{read_only:false; response_revision:1193; number_of_response:1; }","duration":"149.508023ms","start":"2024-04-08T13:07:18.833084Z","end":"2024-04-08T13:07:18.982592Z","steps":["trace[133135818] 'process raft request'  (duration: 149.334581ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-08T13:07:25.538265Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2024-04-08T13:07:25.543386Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":955,"took":"4.639973ms","hash":3489296865,"current-db-size-bytes":2310144,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-08T13:07:25.543555Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3489296865,"revision":955,"compact-revision":712}
	
	
	==> kernel <==
	 13:07:53 up 20 min,  0 users,  load average: 0.00, 0.11, 0.11
	Linux embed-certs-488947 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e8346ea478a60c842851bcf7440a10638d7d9f6c0627532680d0b223ada6a75] <==
	I0408 13:02:28.485647       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:03:28.484826       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:03:28.485045       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:03:28.485079       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:03:28.485980       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:03:28.486045       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:03:28.486089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:05:28.485720       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:05:28.485810       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:05:28.485820       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:05:28.486926       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:05:28.487023       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:05:28.487031       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:07:27.489405       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:07:27.489532       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0408 13:07:28.490606       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:07:28.490826       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:07:28.490859       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:07:28.490606       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:07:28.490974       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:07:28.492258       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5e6f9ed437945e3254164566e5541876da6da0dfa50406ecfdb0c354503c2df9] <==
	I0408 13:02:13.923254       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:02:43.458141       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:02:43.933320       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:03:13.465078       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:03:13.944057       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 13:03:39.287671       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="231.693µs"
	E0408 13:03:43.471095       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:03:43.953967       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 13:03:53.287054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.038891ms"
	E0408 13:04:13.477306       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:04:13.963619       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:04:43.483381       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:04:43.973305       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:05:13.488873       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:05:13.982558       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:05:43.494766       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:05:43.991957       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:06:13.500261       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:06:14.003449       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:06:43.507865       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:06:44.021889       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:07:13.516809       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:07:14.033753       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:07:43.525233       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:07:44.046535       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4bc85eb1a4d2d77c9508683569c44e363576b0fae7eec34373bebe0746e7f13c] <==
	I0408 12:52:45.555780       1 server_others.go:72] "Using iptables proxy"
	I0408 12:52:45.579584       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.159"]
	I0408 12:52:45.673537       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:52:45.673562       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:52:45.673587       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:52:45.720468       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:52:45.779386       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:52:45.779446       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:52:45.791604       1 config.go:188] "Starting service config controller"
	I0408 12:52:45.791717       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:52:45.791917       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:52:45.791995       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:52:45.798672       1 config.go:315] "Starting node config controller"
	I0408 12:52:45.798705       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:52:45.993912       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0408 12:52:45.993997       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:52:46.000015       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7a1d769685f623b40a969e9b6106d9b85c2781c05f2188c40432726b27114893] <==
	W0408 12:52:28.527378       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 12:52:28.527433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 12:52:28.556348       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0408 12:52:28.556376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0408 12:52:28.597819       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 12:52:28.597985       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:52:28.687729       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 12:52:28.687878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 12:52:28.718468       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 12:52:28.719683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0408 12:52:28.745520       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 12:52:28.745717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 12:52:28.749525       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 12:52:28.749592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 12:52:28.769031       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 12:52:28.769128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 12:52:28.816344       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 12:52:28.816433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 12:52:28.817981       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 12:52:28.818015       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 12:52:28.824272       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 12:52:28.824330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0408 12:52:28.856379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 12:52:28.856445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0408 12:52:30.741188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 13:05:31 embed-certs-488947 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:05:31 embed-certs-488947 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:05:31 embed-certs-488947 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:05:31 embed-certs-488947 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:05:39 embed-certs-488947 kubelet[3900]: E0408 13:05:39.269341    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:05:51 embed-certs-488947 kubelet[3900]: E0408 13:05:51.270254    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:06:03 embed-certs-488947 kubelet[3900]: E0408 13:06:03.270460    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:06:18 embed-certs-488947 kubelet[3900]: E0408 13:06:18.269870    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:06:31 embed-certs-488947 kubelet[3900]: E0408 13:06:31.269981    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:06:31 embed-certs-488947 kubelet[3900]: E0408 13:06:31.350121    3900 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:06:31 embed-certs-488947 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:06:31 embed-certs-488947 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:06:31 embed-certs-488947 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:06:31 embed-certs-488947 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:06:45 embed-certs-488947 kubelet[3900]: E0408 13:06:45.269096    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:06:56 embed-certs-488947 kubelet[3900]: E0408 13:06:56.270185    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:07:10 embed-certs-488947 kubelet[3900]: E0408 13:07:10.269218    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:07:21 embed-certs-488947 kubelet[3900]: E0408 13:07:21.269061    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:07:31 embed-certs-488947 kubelet[3900]: E0408 13:07:31.349356    3900 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:07:31 embed-certs-488947 kubelet[3900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:07:31 embed-certs-488947 kubelet[3900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:07:31 embed-certs-488947 kubelet[3900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:07:31 embed-certs-488947 kubelet[3900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:07:32 embed-certs-488947 kubelet[3900]: E0408 13:07:32.268241    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	Apr 08 13:07:46 embed-certs-488947 kubelet[3900]: E0408 13:07:46.268078    3900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-87ddx" podUID="9e6f83bf-7954-4003-b66a-e62d52985947"
	
	
	==> storage-provisioner [152b429d9251c9c81e9dc32ac274ed0083bd4bd02d6861db91b2e7e0298a53c0] <==
	I0408 12:52:46.971944       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:52:46.983842       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:52:46.983934       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:52:46.995635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:52:46.995836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-488947_f0ff17a7-d3f7-445a-8c1f-feb7aaa1e27e!
	I0408 12:52:46.997085       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69cb632f-180b-4beb-a8e5-6535117668c8", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-488947_f0ff17a7-d3f7-445a-8c1f-feb7aaa1e27e became leader
	I0408 12:52:47.096417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-488947_f0ff17a7-d3f7-445a-8c1f-feb7aaa1e27e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-488947 -n embed-certs-488947
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-488947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-87ddx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-488947 describe pod metrics-server-57f55c9bc5-87ddx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-488947 describe pod metrics-server-57f55c9bc5-87ddx: exit status 1 (81.59026ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-87ddx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-488947 describe pod metrics-server-57f55c9bc5-87ddx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (360.87s)
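Note on the post-mortem above: the kubelet log shows metrics-server stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which is expected for this test group, since the addon is enabled with --registries=MetricsServer=fake.domain (see the Audit table further down). A minimal sketch of how one could confirm the overridden image on the live deployment, assuming the embed-certs-488947 profile had not yet been deleted and the addon uses its usual k8s-app=metrics-server label:

  # Show the image reference the metrics-server Deployment is trying to pull
  kubectl --context embed-certs-488947 -n kube-system \
    get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

  # Inspect the pull-failure events on the pod
  kubectl --context embed-certs-488947 -n kube-system \
    describe pod -l k8s-app=metrics-server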

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (342.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-08 13:08:09.457865323 +0000 UTC m=+6467.244547526
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-527454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.553µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-527454 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
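The failure here is that no pod matching k8s-app=kubernetes-dashboard showed up healthy within the 9m window, so the subsequent deployment lookup had nothing to report. A minimal sketch of the equivalent manual check, assuming the default-k8s-diff-port-527454 context is still reachable:

  # List the dashboard pods the test was polling for
  kubectl --context default-k8s-diff-port-527454 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard

  # Block until they are Ready (mirrors the test's 9m timeout)
  kubectl --context default-k8s-diff-port-527454 -n kubernetes-dashboard \
    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m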
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-527454 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-527454 logs -n 25: (1.29173036s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:06 UTC | 08 Apr 24 13:06 UTC |
	| start   | -p newest-cni-337169 --memory=2200 --alsologtostderr   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:06 UTC | 08 Apr 24 13:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	| addons  | enable metrics-server -p newest-cni-337169             | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-337169                                   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-337169                  | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-337169 --memory=2200 --alsologtostderr   | newest-cni-337169            | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 13:07 UTC | 08 Apr 24 13:07 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 13:07:51
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 13:07:51.830001  440985 out.go:291] Setting OutFile to fd 1 ...
	I0408 13:07:51.830155  440985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 13:07:51.830171  440985 out.go:304] Setting ErrFile to fd 2...
	I0408 13:07:51.830180  440985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 13:07:51.830369  440985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 13:07:51.831009  440985 out.go:298] Setting JSON to false
	I0408 13:07:51.831972  440985 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":10215,"bootTime":1712571457,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 13:07:51.832047  440985 start.go:139] virtualization: kvm guest
	I0408 13:07:51.834904  440985 out.go:177] * [newest-cni-337169] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 13:07:51.837427  440985 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 13:07:51.837421  440985 notify.go:220] Checking for updates...
	I0408 13:07:51.839240  440985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 13:07:51.840775  440985 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 13:07:51.842366  440985 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 13:07:51.844120  440985 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 13:07:51.846004  440985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 13:07:51.848707  440985 config.go:182] Loaded profile config "newest-cni-337169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 13:07:51.849181  440985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:07:51.849233  440985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:07:51.867044  440985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I0408 13:07:51.867566  440985 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:07:51.868207  440985 main.go:141] libmachine: Using API Version  1
	I0408 13:07:51.868229  440985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:07:51.868666  440985 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:07:51.868886  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:07:51.869212  440985 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 13:07:51.869641  440985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:07:51.869685  440985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:07:51.885764  440985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0408 13:07:51.886340  440985 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:07:51.886941  440985 main.go:141] libmachine: Using API Version  1
	I0408 13:07:51.886978  440985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:07:51.887370  440985 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:07:51.887598  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:07:51.930616  440985 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 13:07:51.932032  440985 start.go:297] selected driver: kvm2
	I0408 13:07:51.932068  440985 start.go:901] validating driver "kvm2" against &{Name:newest-cni-337169 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0-rc.0 ClusterName:newest-cni-337169 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 13:07:51.932218  440985 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 13:07:51.932934  440985 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 13:07:51.933026  440985 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 13:07:51.949790  440985 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 13:07:51.950254  440985 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0408 13:07:51.950341  440985 cni.go:84] Creating CNI manager for ""
	I0408 13:07:51.950359  440985 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 13:07:51.950410  440985 start.go:340] cluster config:
	{Name:newest-cni-337169 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-337169 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 13:07:51.950592  440985 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 13:07:51.952864  440985 out.go:177] * Starting "newest-cni-337169" primary control-plane node in "newest-cni-337169" cluster
	I0408 13:07:51.954660  440985 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 13:07:51.954714  440985 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0408 13:07:51.954723  440985 cache.go:56] Caching tarball of preloaded images
	I0408 13:07:51.954853  440985 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 13:07:51.954870  440985 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0408 13:07:51.954982  440985 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/newest-cni-337169/config.json ...
	I0408 13:07:51.955187  440985 start.go:360] acquireMachinesLock for newest-cni-337169: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 13:07:51.955233  440985 start.go:364] duration metric: took 24.445µs to acquireMachinesLock for "newest-cni-337169"
	I0408 13:07:51.955247  440985 start.go:96] Skipping create...Using existing machine configuration
	I0408 13:07:51.955255  440985 fix.go:54] fixHost starting: 
	I0408 13:07:51.955506  440985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 13:07:51.955544  440985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 13:07:51.971792  440985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0408 13:07:51.972292  440985 main.go:141] libmachine: () Calling .GetVersion
	I0408 13:07:51.972810  440985 main.go:141] libmachine: Using API Version  1
	I0408 13:07:51.972825  440985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 13:07:51.973198  440985 main.go:141] libmachine: () Calling .GetMachineName
	I0408 13:07:51.973461  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	I0408 13:07:51.973710  440985 main.go:141] libmachine: (newest-cni-337169) Calling .GetState
	I0408 13:07:51.976080  440985 fix.go:112] recreateIfNeeded on newest-cni-337169: state=Stopped err=<nil>
	I0408 13:07:51.976119  440985 main.go:141] libmachine: (newest-cni-337169) Calling .DriverName
	W0408 13:07:51.976565  440985 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 13:07:51.979841  440985 out.go:177] * Restarting existing kvm2 VM for "newest-cni-337169" ...
	I0408 13:07:51.981663  440985 main.go:141] libmachine: (newest-cni-337169) Calling .Start
	I0408 13:07:51.982010  440985 main.go:141] libmachine: (newest-cni-337169) Ensuring networks are active...
	I0408 13:07:51.983162  440985 main.go:141] libmachine: (newest-cni-337169) Ensuring network default is active
	I0408 13:07:51.983618  440985 main.go:141] libmachine: (newest-cni-337169) Ensuring network mk-newest-cni-337169 is active
	I0408 13:07:51.984286  440985 main.go:141] libmachine: (newest-cni-337169) Getting domain xml...
	I0408 13:07:51.985356  440985 main.go:141] libmachine: (newest-cni-337169) Creating domain...
	I0408 13:07:53.319358  440985 main.go:141] libmachine: (newest-cni-337169) Waiting to get IP...
	I0408 13:07:53.320306  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:53.320789  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:53.320900  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:53.320748  441060 retry.go:31] will retry after 236.286592ms: waiting for machine to come up
	I0408 13:07:53.558587  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:53.559264  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:53.559298  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:53.559215  441060 retry.go:31] will retry after 304.335874ms: waiting for machine to come up
	I0408 13:07:53.865672  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:53.866196  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:53.866223  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:53.866153  441060 retry.go:31] will retry after 411.417643ms: waiting for machine to come up
	I0408 13:07:54.279857  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:54.280416  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:54.280451  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:54.280364  441060 retry.go:31] will retry after 601.964637ms: waiting for machine to come up
	I0408 13:07:54.884366  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:54.884916  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:54.884952  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:54.884845  441060 retry.go:31] will retry after 492.313884ms: waiting for machine to come up
	I0408 13:07:55.379283  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:55.379906  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:55.379941  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:55.379846  441060 retry.go:31] will retry after 676.449417ms: waiting for machine to come up
	I0408 13:07:56.057815  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:56.058370  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:56.058402  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:56.058285  441060 retry.go:31] will retry after 978.157793ms: waiting for machine to come up
	I0408 13:07:57.038699  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:57.039128  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:57.039188  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:57.039073  441060 retry.go:31] will retry after 994.391841ms: waiting for machine to come up
	I0408 13:07:58.035222  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:58.035760  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:58.035794  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:58.035674  441060 retry.go:31] will retry after 1.180349957s: waiting for machine to come up
	I0408 13:07:59.218123  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:07:59.218693  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:07:59.218724  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:07:59.218629  441060 retry.go:31] will retry after 1.778818881s: waiting for machine to come up
	I0408 13:08:00.999776  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:08:01.000394  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:08:01.000429  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:08:01.000324  441060 retry.go:31] will retry after 2.650502357s: waiting for machine to come up
	I0408 13:08:03.652959  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:08:03.653493  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:08:03.653529  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:08:03.653407  441060 retry.go:31] will retry after 2.458737359s: waiting for machine to come up
	I0408 13:08:06.114992  440985 main.go:141] libmachine: (newest-cni-337169) DBG | domain newest-cni-337169 has defined MAC address 52:54:00:d3:f8:6b in network mk-newest-cni-337169
	I0408 13:08:06.115385  440985 main.go:141] libmachine: (newest-cni-337169) DBG | unable to find current IP address of domain newest-cni-337169 in network mk-newest-cni-337169
	I0408 13:08:06.115429  440985 main.go:141] libmachine: (newest-cni-337169) DBG | I0408 13:08:06.115327  441060 retry.go:31] will retry after 3.962502289s: waiting for machine to come up
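	The "will retry after ..." lines above come from minikube's retry helper polling libvirt for the VM's DHCP lease; the delays grow roughly geometrically with jitter (236ms, 304ms, 411ms, ... up to several seconds). The snippet below is only a minimal, self-contained sketch of that retry-with-growing-jittered-delay pattern, not minikube's actual retry package: the name retryWithBackoff, the 200 ms starting interval, and the 1.5x growth factor are assumptions for illustration.
	
	// retry_sketch.go - illustrative only; names and backoff policy are assumed,
	// not taken from minikube's pkg/util/retry.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping a
	// randomized, roughly increasing interval between attempts (mirroring the
	// 236ms, 304ms, 411ms, ... progression in the log above).
	func retryWithBackoff(fn func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		base := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
			}
			// add jitter on top of the current base interval
			delay := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			base = base * 3 / 2 // grow the base interval for the next attempt
		}
	}
	
	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}
	
	Under these assumptions the sketch fails three times, printing a "will retry after ..." line with an increasing delay each time, and then returns nil, which is the shape of the wait-for-IP loop recorded above.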
	
	
	==> CRI-O <==
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.107172819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581690107144085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4676a868-d47e-4830-a2f3-5e460c4f2f08 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.107970824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=393eaba0-f7c4-4bc4-9eae-13e27212512a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.108022576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=393eaba0-f7c4-4bc4-9eae-13e27212512a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.108207295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=393eaba0-f7c4-4bc4-9eae-13e27212512a name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.155871985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11312496-ef72-4e42-bf6e-7a8e445ce92a name=/runtime.v1.RuntimeService/Version
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.156038279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11312496-ef72-4e42-bf6e-7a8e445ce92a name=/runtime.v1.RuntimeService/Version
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.157251849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2d4f960-aa2d-4338-bd80-2095f2f118df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.157661206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581690157636002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2d4f960-aa2d-4338-bd80-2095f2f118df name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.158290401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e7943d3-94bd-4db9-8b06-6c479d0fb318 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.158351506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e7943d3-94bd-4db9-8b06-6c479d0fb318 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.158541490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e7943d3-94bd-4db9-8b06-6c479d0fb318 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.204163840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f6b314e-aa99-43f3-a9d6-f3581aa849b0 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.204259205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f6b314e-aa99-43f3-a9d6-f3581aa849b0 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.207872945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2c4004d-3753-4d3d-bdd2-6d8db35c0ec7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.208438812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581690208410140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2c4004d-3753-4d3d-bdd2-6d8db35c0ec7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.209086029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17daca85-48d6-4661-9656-38294ee78160 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.209178327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17daca85-48d6-4661-9656-38294ee78160 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.209540927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17daca85-48d6-4661-9656-38294ee78160 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.255821098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ab0fb94-680c-44de-8ba7-9748fa8cae8d name=/runtime.v1.RuntimeService/Version
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.256115540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ab0fb94-680c-44de-8ba7-9748fa8cae8d name=/runtime.v1.RuntimeService/Version
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.257543083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=707ff8cd-7ea0-453b-9e84-a4107852de05 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.258041107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581690258010281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=707ff8cd-7ea0-453b-9e84-a4107852de05 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.258590012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b06ca8f5-a88c-4877-a126-ddcbef54c5bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.258639151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b06ca8f5-a88c-4877-a126-ddcbef54c5bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:08:10 default-k8s-diff-port-527454 crio[730]: time="2024-04-08 13:08:10.258834692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a,PodSandboxId:5d816e024dad921c8c79784a9ff5eec2403a8cb531cd861641ed8e5fcaf9e9a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712580801921399033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 040c7a58-258b-4798-8fae-7dc42ce50cac,},Annotations:map[string]string{io.kubernetes.container.hash: 3f42524f,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a,PodSandboxId:5ce1be244803e2de952d451a8dd9705fffa60830b33a90c83d1ad424096ba2be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580801043850399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-z56lf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 132d7297-7ba4-4f7f-bef8-66c67b4ef8f2,},Annotations:map[string]string{io.kubernetes.container.hash: f4b906a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c,PodSandboxId:3c5961baf1a9741f0fcb84160be9e2ddfe8bbbed8b188a362017b2c472e86205,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712580800913808760,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7v2jc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 0ff09cb3-5ab5-4c6c-96cb-473f3473b06d,},Annotations:map[string]string{io.kubernetes.container.hash: 64a1eda7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50,PodSandboxId:70a5bbfac971a7fce41e7e19159ef3ab47221ce4892b302477bbc2272bd31d9d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1712580799873363941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tlhff,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6365e5a8-345d-4e77-988c-1dcab7b21065,},Annotations:map[string]string{io.kubernetes.container.hash: 1e2277ad,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887,PodSandboxId:21397eb0eeec3d5beec065178ccd5988a2ea2da97bdd682ad8da9b0abe7af919,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712580780556149513,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec1e19f40a59ac70091be4a0e88cdcf9,},Annotations:map[string]string{io.kubernetes.container.hash: b041b19f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839,PodSandboxId:b32f80755ee4291a73b185b2727b08aa51ab0f6c5e3af7f132a4345a6444c698,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712580780479261310,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95bd65f702d1e6dfe203c48b0c45374b,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc,PodSandboxId:442bb75a728c9ac96bd044f2f31293d53b7f33db0bb4ca16389f1d1189e3e088,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712580780468865325,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23950c6d0658066d1ce2f95af947c062,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507,PodSandboxId:c68fa3593ad7d7d5f4cb56a6c1df0ba2bb863038f9e15ca0f556863d56e4ce09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712580780467246417,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-527454,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b9906d932be6b4ffd22314e897eab29,},Annotations:map[string]string{io.kubernetes.container.hash: 2509e5b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b06ca8f5-a88c-4877-a126-ddcbef54c5bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab9b8ad3dd0b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   5d816e024dad9       storage-provisioner
	152b1f090e632       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   5ce1be244803e       coredns-76f75df574-z56lf
	d5be5b73f4749       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   3c5961baf1a97       coredns-76f75df574-7v2jc
	7674f0c7c9a53       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   14 minutes ago      Running             kube-proxy                0                   70a5bbfac971a       kube-proxy-tlhff
	45beb8e8d0672       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   21397eb0eeec3       etcd-default-k8s-diff-port-527454
	e63266466add0       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   15 minutes ago      Running             kube-scheduler            2                   b32f80755ee42       kube-scheduler-default-k8s-diff-port-527454
	e76cf4cd181c5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   15 minutes ago      Running             kube-controller-manager   2                   442bb75a728c9       kube-controller-manager-default-k8s-diff-port-527454
	0b459dab2129e       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   15 minutes ago      Running             kube-apiserver            2                   c68fa3593ad7d       kube-apiserver-default-k8s-diff-port-527454
	
	
	==> coredns [152b1f090e63285141bf3a2a17b9e861b0ea4a18efd773057ed17e0e0271b67a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d5be5b73f474930da99650220f7b1eec28cbc806ea0e0dbea49950330062701c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-527454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-527454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517
	                    minikube.k8s.io/name=default-k8s-diff-port-527454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Apr 2024 12:53:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-527454
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Apr 2024 13:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Apr 2024 13:03:38 +0000   Mon, 08 Apr 2024 12:53:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Apr 2024 13:03:38 +0000   Mon, 08 Apr 2024 12:53:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Apr 2024 13:03:38 +0000   Mon, 08 Apr 2024 12:53:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Apr 2024 13:03:38 +0000   Mon, 08 Apr 2024 12:53:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    default-k8s-diff-port-527454
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 70aaa6404194440f92b37b0f9932f978
	  System UUID:                70aaa640-4194-440f-92b3-7b0f9932f978
	  Boot ID:                    400b1cc2-5095-46c2-bd20-ea1e3e6c2916
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-7v2jc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-76f75df574-z56lf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-527454                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-527454             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-527454    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tlhff                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-527454             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-jqbmw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node default-k8s-diff-port-527454 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node default-k8s-diff-port-527454 event: Registered Node default-k8s-diff-port-527454 in Controller
	
	
	==> dmesg <==
	[  +0.054689] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044318] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.063005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.088542] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.699124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 8 12:48] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.061519] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071382] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.188772] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.158188] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.332966] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.884855] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.068096] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.366243] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.679324] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.358293] kauditd_printk_skb: 79 callbacks suppressed
	[Apr 8 12:52] kauditd_printk_skb: 4 callbacks suppressed
	[  +2.331574] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[Apr 8 12:53] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.875931] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +12.897299] systemd-fstab-generator[4088]: Ignoring "noauto" option for root device
	[  +0.141509] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 8 12:54] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [45beb8e8d0672738ab6f2162d997c82db76955b4e078c2845deccba09c8da887] <==
	{"level":"info","ts":"2024-04-08T12:53:00.914364Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-04-08T12:53:01.377983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-08T12:53:01.37804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-08T12:53:01.378073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 1"}
	{"level":"info","ts":"2024-04-08T12:53:01.37809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.378099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.378107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.378115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-04-08T12:53:01.382206Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:default-k8s-diff-port-527454 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-08T12:53:01.382377Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.382527Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:53:01.388312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-08T12:53:01.399987Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-08T12:53:01.400044Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-08T12:53:01.400105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.400194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.400236Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-08T12:53:01.400596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-04-08T12:53:01.406725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-08T13:03:01.532184Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":713}
	{"level":"info","ts":"2024-04-08T13:03:01.542722Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":713,"took":"9.483183ms","hash":1661011617,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2195456,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-08T13:03:01.542826Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1661011617,"revision":713,"compact-revision":-1}
	{"level":"info","ts":"2024-04-08T13:08:01.541306Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-04-08T13:08:01.546355Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":957,"took":"4.653987ms","hash":99237640,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-08T13:08:01.546413Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":99237640,"revision":957,"compact-revision":713}
	
	
	==> kernel <==
	 13:08:10 up 20 min,  0 users,  load average: 0.26, 0.19, 0.14
	Linux default-k8s-diff-port-527454 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b459dab2129ebd6f2bc19e8e6d46c3d7b1d9b193c46dfc92f61e9eab176a507] <==
	I0408 13:03:04.457291       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:04:04.456669       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:04:04.456998       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:04:04.457035       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:04:04.457885       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:04:04.458046       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:04:04.458076       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:06:04.457613       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:06:04.458092       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:06:04.458137       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:06:04.458239       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:06:04.458338       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:06:04.459960       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:08:03.458826       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:08:03.459243       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0408 13:08:04.459475       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:08:04.459538       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0408 13:08:04.459552       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0408 13:08:04.459614       1 handler_proxy.go:93] no RequestInfo found in the context
	E0408 13:08:04.459761       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0408 13:08:04.461641       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e76cf4cd181c5cfca75124eb1c75e622099ad40168774404bbdb45254caf0abc] <==
	I0408 13:02:19.229112       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:02:48.737143       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:02:49.237235       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:03:18.743893       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:03:19.246694       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:03:48.749848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:03:49.254787       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:04:18.756703       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:04:19.264623       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0408 13:04:32.082321       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="326.412µs"
	I0408 13:04:45.087998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="192.648µs"
	E0408 13:04:48.762213       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:04:49.273171       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:05:18.769093       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:05:19.282558       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:05:48.775152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:05:49.290935       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:06:18.781711       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:06:19.299945       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:06:48.788469       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:06:49.311239       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:07:18.796353       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:07:19.320116       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0408 13:07:48.802629       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0408 13:07:49.328402       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7674f0c7c9a53f0d728f14537abd12990578e06bda2231bed55a64db3d356d50] <==
	I0408 12:53:20.269360       1 server_others.go:72] "Using iptables proxy"
	I0408 12:53:20.288748       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.7"]
	I0408 12:53:20.378273       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0408 12:53:20.378357       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 12:53:20.378375       1 server_others.go:168] "Using iptables Proxier"
	I0408 12:53:20.383185       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0408 12:53:20.383467       1 server.go:865] "Version info" version="v1.29.3"
	I0408 12:53:20.383498       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 12:53:20.384811       1 config.go:188] "Starting service config controller"
	I0408 12:53:20.384874       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0408 12:53:20.385032       1 config.go:97] "Starting endpoint slice config controller"
	I0408 12:53:20.385039       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0408 12:53:20.385838       1 config.go:315] "Starting node config controller"
	I0408 12:53:20.385846       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0408 12:53:20.490369       1 shared_informer.go:318] Caches are synced for node config
	I0408 12:53:20.490418       1 shared_informer.go:318] Caches are synced for service config
	I0408 12:53:20.490447       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e63266466add00072a06e369c9b6dcf1ab9c59e68a728a27c4ad39b476ee9839] <==
	W0408 12:53:04.297261       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.297371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.316105       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0408 12:53:04.316199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0408 12:53:04.431485       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.431601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.662189       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0408 12:53:04.663749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0408 12:53:04.666406       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 12:53:04.666489       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0408 12:53:04.683290       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0408 12:53:04.683347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0408 12:53:04.685178       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0408 12:53:04.685222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0408 12:53:04.685382       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.686131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.715670       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0408 12:53:04.715722       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0408 12:53:04.738353       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 12:53:04.738400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0408 12:53:04.752529       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 12:53:04.752585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0408 12:53:04.796339       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 12:53:04.796397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0408 12:53:07.749218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 08 13:06:04 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:06:04.063744    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:06:07 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:06:07.127730    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:06:07 default-k8s-diff-port-527454 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:06:07 default-k8s-diff-port-527454 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:06:07 default-k8s-diff-port-527454 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:06:07 default-k8s-diff-port-527454 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:06:18 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:06:18.064652    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:06:30 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:06:30.063591    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:06:41 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:06:41.065407    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:06:54 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:06:54.063829    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:07:07 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:07:07.125818    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:07:07 default-k8s-diff-port-527454 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:07:07 default-k8s-diff-port-527454 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:07:07 default-k8s-diff-port-527454 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:07:07 default-k8s-diff-port-527454 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 13:07:08 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:07:08.064853    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:07:23 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:07:23.063775    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:07:37 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:07:37.064083    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:07:48 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:07:48.063324    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:07:59 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:07:59.063142    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-jqbmw" podUID="f2c5e235-6807-4248-81ff-a5e49c8a753b"
	Apr 08 13:08:07 default-k8s-diff-port-527454 kubelet[3902]: E0408 13:08:07.126432    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 08 13:08:07 default-k8s-diff-port-527454 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 13:08:07 default-k8s-diff-port-527454 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 13:08:07 default-k8s-diff-port-527454 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 13:08:07 default-k8s-diff-port-527454 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [ab9b8ad3dd0b9c9dc0d80acd7abb0ee3a8d43c3c1fb783e2380da2d58a78193a] <==
	I0408 12:53:22.050163       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0408 12:53:22.060236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0408 12:53:22.060350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0408 12:53:22.084140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0408 12:53:22.084354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-527454_877718d7-2eb4-4d19-a7da-a516b03067da!
	I0408 12:53:22.085661       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7757a94-2e1b-45e7-907e-77fa413779b0", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-527454_877718d7-2eb4-4d19-a7da-a516b03067da became leader
	I0408 12:53:22.184872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-527454_877718d7-2eb4-4d19-a7da-a516b03067da!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-jqbmw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 describe pod metrics-server-57f55c9bc5-jqbmw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-527454 describe pod metrics-server-57f55c9bc5-jqbmw: exit status 1 (74.142361ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-jqbmw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-527454 describe pod metrics-server-57f55c9bc5-jqbmw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (342.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (108.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[warning repeated 30 more times]
E0408 13:05:29.654986  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/custom-flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[warning repeated 41 more times]
E0408 13:06:11.828222  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/bridge-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[warning repeated 11 more times]
E0408 13:06:24.010107  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/flannel-583253/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.245:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.245:8443: connect: connection refused
[warning repeated 16 more times]
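The warnings above come from the test helper repeatedly listing pods with the k8s-app=kubernetes-dashboard label selector and getting connection refused from the apiserver endpoint 192.168.39.245:8443, which is consistent with the Stopped apiserver status reported below. A minimal sketch of reproducing the same check by hand, assuming the profile and context names from this log and the kubeconfig used by the run:

    # Is the apiserver for this profile reporting Running?
    out/minikube-linux-amd64 status -p old-k8s-version-384148 --format='{{.APIServer}}'

    # The same pod-list query the helper keeps retrying, issued via kubectl
    kubectl --context old-k8s-version-384148 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide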
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (257.881933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-384148" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-384148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-384148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.556µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-384148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
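The final assertion (start_stop_delete_test.go:297) checks that the dashboard-metrics-scraper deployment carries the overridden image registry.k8s.io/echoserver:1.4, matching the earlier addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4 call recorded in the audit log below; here it fails with empty deployment info because the describe call itself hit the context deadline. A hedged way to inspect that image by hand once the apiserver is reachable, assuming the same context name:

    # Print the container image(s) used by the metrics scraper deployment
    kubectl --context old-k8s-version-384148 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'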
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (254.437286ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-384148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-384148 logs -n 25: (1.643081311s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo cat                                               |                              |         |                |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo containerd config dump                            |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl status crio                             |                              |         |                |                     |                     |
	|         | --all --full --no-pager                                |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo systemctl cat crio                                |                              |         |                |                     |                     |
	|         | --no-pager                                             |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |                |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |                |                     |                     |
	|         | \;                                                     |                              |         |                |                     |                     |
	| ssh     | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | sudo crio config                                       |                              |         |                |                     |                     |
	| delete  | -p enable-default-cni-583253                           | enable-default-cni-583253    | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	| delete  | -p                                                     | disable-driver-mounts-122490 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:38 UTC |
	|         | disable-driver-mounts-122490                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:38 UTC | 08 Apr 24 12:39 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-527454  | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-135234             | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-488947            | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC | 08 Apr 24 12:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-384148        | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-527454       | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-135234                  | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-527454 | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:53 UTC |
	|         | default-k8s-diff-port-527454                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| start   | -p no-preload-135234                                   | no-preload-135234            | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:51 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-488947                 | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-488947                                  | embed-certs-488947           | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:52 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-384148             | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC | 08 Apr 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-384148                              | old-k8s-version-384148       | jenkins | v1.33.0-beta.0 | 08 Apr 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 12:42:31
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 12:42:31.610028  433881 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:42:31.610291  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610300  433881 out.go:304] Setting ErrFile to fd 2...
	I0408 12:42:31.610304  433881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:42:31.610590  433881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:42:31.611834  433881 out.go:298] Setting JSON to false
	I0408 12:42:31.613323  433881 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":8695,"bootTime":1712571457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:42:31.613413  433881 start.go:139] virtualization: kvm guest
	I0408 12:42:31.615441  433881 out.go:177] * [old-k8s-version-384148] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:42:31.617429  433881 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:42:31.617459  433881 notify.go:220] Checking for updates...
	I0408 12:42:31.618918  433881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:42:31.620434  433881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:42:31.621883  433881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:42:31.623381  433881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:42:31.624858  433881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:42:31.626731  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:42:31.627141  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.627193  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.642980  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0408 12:42:31.643395  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.644144  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.644166  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.644557  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.644768  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.646980  433881 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0408 12:42:31.648378  433881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:42:31.648694  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:42:31.648732  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:42:31.663924  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0408 12:42:31.664361  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:42:31.664884  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:42:31.664910  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:42:31.665218  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:42:31.665445  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:42:31.701652  433881 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 12:42:31.703025  433881 start.go:297] selected driver: kvm2
	I0408 12:42:31.703041  433881 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.703192  433881 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:42:31.703924  433881 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.704018  433881 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 12:42:31.719599  433881 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 12:42:31.720001  433881 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:42:31.720084  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:42:31.720102  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:42:31.720156  433881 start.go:340] cluster config:
	{Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:42:31.720330  433881 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 12:42:31.722299  433881 out.go:177] * Starting "old-k8s-version-384148" primary control-plane node in "old-k8s-version-384148" cluster
	I0408 12:42:31.723540  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:42:31.723577  433881 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 12:42:31.723594  433881 cache.go:56] Caching tarball of preloaded images
	I0408 12:42:31.723718  433881 preload.go:173] Found /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0408 12:42:31.723733  433881 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0408 12:42:31.723846  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:42:31.724039  433881 start.go:360] acquireMachinesLock for old-k8s-version-384148: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:42:32.207974  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:38.288048  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:41.359947  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:47.439972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:50.512009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:56.591982  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:42:59.664002  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:05.744032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:08.816017  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:14.895990  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:17.967942  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:24.048010  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:27.119964  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:33.200067  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:36.272037  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:42.351972  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:45.424082  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:51.503992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:43:54.576088  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:00.656001  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:03.728079  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:09.807949  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:12.880051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:18.960024  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:22.032036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:28.112053  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:31.183992  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:37.264032  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:40.336026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:46.416019  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:49.487998  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:55.568026  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:44:58.640044  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:04.719978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:07.792028  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:13.871997  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:16.944057  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:23.023969  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:26.096051  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:32.176049  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:35.247929  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:41.328036  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:44.399954  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:50.480046  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:53.552034  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:45:59.632009  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:02.704063  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:08.784031  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:11.856098  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:17.936013  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:21.007970  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:27.087978  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:30.159984  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:36.240042  433439 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0408 12:46:39.245220  433557 start.go:364] duration metric: took 4m33.298555643s to acquireMachinesLock for "no-preload-135234"
	I0408 12:46:39.245298  433557 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:39.245311  433557 fix.go:54] fixHost starting: 
	I0408 12:46:39.245782  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:39.245821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:39.261035  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35301
	I0408 12:46:39.261632  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:39.262208  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:46:39.262234  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:39.262592  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:39.262819  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:39.262938  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:46:39.264995  433557 fix.go:112] recreateIfNeeded on no-preload-135234: state=Stopped err=<nil>
	I0408 12:46:39.265029  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	W0408 12:46:39.265203  433557 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:39.266971  433557 out.go:177] * Restarting existing kvm2 VM for "no-preload-135234" ...
	I0408 12:46:39.268140  433557 main.go:141] libmachine: (no-preload-135234) Calling .Start
	I0408 12:46:39.268315  433557 main.go:141] libmachine: (no-preload-135234) Ensuring networks are active...
	I0408 12:46:39.269323  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network default is active
	I0408 12:46:39.269669  433557 main.go:141] libmachine: (no-preload-135234) Ensuring network mk-no-preload-135234 is active
	I0408 12:46:39.270047  433557 main.go:141] libmachine: (no-preload-135234) Getting domain xml...
	I0408 12:46:39.270763  433557 main.go:141] libmachine: (no-preload-135234) Creating domain...
	I0408 12:46:40.496145  433557 main.go:141] libmachine: (no-preload-135234) Waiting to get IP...
	I0408 12:46:40.497357  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.497870  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.497950  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.497853  434768 retry.go:31] will retry after 305.764185ms: waiting for machine to come up
	I0408 12:46:40.805894  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:40.806351  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:40.806380  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:40.806304  434768 retry.go:31] will retry after 359.02584ms: waiting for machine to come up
	I0408 12:46:39.242442  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:39.242498  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.242871  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:46:39.242935  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:46:39.243206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:46:39.245063  433439 machine.go:97] duration metric: took 4m37.367683512s to provisionDockerMachine
	I0408 12:46:39.245112  433439 fix.go:56] duration metric: took 4m37.391017413s for fixHost
	I0408 12:46:39.245118  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 4m37.391040241s
	W0408 12:46:39.245140  433439 start.go:713] error starting host: provision: host is not running
	W0408 12:46:39.245388  433439 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0408 12:46:39.245401  433439 start.go:728] Will try again in 5 seconds ...
	I0408 12:46:41.167272  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.167748  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.167779  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.167702  434768 retry.go:31] will retry after 412.762727ms: waiting for machine to come up
	I0408 12:46:41.582454  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:41.582959  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:41.582990  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:41.582904  434768 retry.go:31] will retry after 572.486121ms: waiting for machine to come up
	I0408 12:46:42.156830  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.157270  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.157294  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.157243  434768 retry.go:31] will retry after 706.130574ms: waiting for machine to come up
	I0408 12:46:42.865325  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:42.865829  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:42.865863  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:42.865762  434768 retry.go:31] will retry after 901.114252ms: waiting for machine to come up
	I0408 12:46:43.768578  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:43.769067  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:43.769103  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:43.769032  434768 retry.go:31] will retry after 1.160836088s: waiting for machine to come up
	I0408 12:46:44.931002  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:44.931408  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:44.931438  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:44.931349  434768 retry.go:31] will retry after 998.940623ms: waiting for machine to come up
	I0408 12:46:44.247774  433439 start.go:360] acquireMachinesLock for default-k8s-diff-port-527454: {Name:mk652c18a262a3ba7b953c47749e248ec98d2e54 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 12:46:45.931728  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:45.932157  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:45.932241  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:45.932115  434768 retry.go:31] will retry after 1.43975568s: waiting for machine to come up
	I0408 12:46:47.373294  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:47.373786  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:47.373821  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:47.373733  434768 retry.go:31] will retry after 1.828434336s: waiting for machine to come up
	I0408 12:46:49.205019  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:49.205414  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:49.205462  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:49.205376  434768 retry.go:31] will retry after 2.847051956s: waiting for machine to come up
	I0408 12:46:52.055004  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:52.055561  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:52.055586  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:52.055517  434768 retry.go:31] will retry after 2.941262871s: waiting for machine to come up
	I0408 12:46:54.998158  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:54.998598  433557 main.go:141] libmachine: (no-preload-135234) DBG | unable to find current IP address of domain no-preload-135234 in network mk-no-preload-135234
	I0408 12:46:54.998631  433557 main.go:141] libmachine: (no-preload-135234) DBG | I0408 12:46:54.998542  434768 retry.go:31] will retry after 3.082026915s: waiting for machine to come up
	I0408 12:46:59.561049  433674 start.go:364] duration metric: took 4m43.922045129s to acquireMachinesLock for "embed-certs-488947"
	I0408 12:46:59.561130  433674 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:46:59.561140  433674 fix.go:54] fixHost starting: 
	I0408 12:46:59.561636  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:46:59.561683  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:46:59.578117  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0408 12:46:59.578573  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:46:59.579047  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:46:59.579074  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:46:59.579432  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:46:59.579633  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:46:59.579852  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:46:59.581445  433674 fix.go:112] recreateIfNeeded on embed-certs-488947: state=Stopped err=<nil>
	I0408 12:46:59.581492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	W0408 12:46:59.581667  433674 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:46:59.584306  433674 out.go:177] * Restarting existing kvm2 VM for "embed-certs-488947" ...
	I0408 12:46:59.585750  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Start
	I0408 12:46:59.585971  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring networks are active...
	I0408 12:46:59.586749  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network default is active
	I0408 12:46:59.587136  433674 main.go:141] libmachine: (embed-certs-488947) Ensuring network mk-embed-certs-488947 is active
	I0408 12:46:59.587551  433674 main.go:141] libmachine: (embed-certs-488947) Getting domain xml...
	I0408 12:46:59.588302  433674 main.go:141] libmachine: (embed-certs-488947) Creating domain...
	I0408 12:46:58.084025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084608  433557 main.go:141] libmachine: (no-preload-135234) Found IP for machine: 192.168.61.48
	I0408 12:46:58.084660  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has current primary IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.084668  433557 main.go:141] libmachine: (no-preload-135234) Reserving static IP address...
	I0408 12:46:58.085160  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.085198  433557 main.go:141] libmachine: (no-preload-135234) Reserved static IP address: 192.168.61.48
	I0408 12:46:58.085213  433557 main.go:141] libmachine: (no-preload-135234) DBG | skip adding static IP to network mk-no-preload-135234 - found existing host DHCP lease matching {name: "no-preload-135234", mac: "52:54:00:9e:80:06", ip: "192.168.61.48"}
	I0408 12:46:58.085229  433557 main.go:141] libmachine: (no-preload-135234) DBG | Getting to WaitForSSH function...
	I0408 12:46:58.085240  433557 main.go:141] libmachine: (no-preload-135234) Waiting for SSH to be available...
	I0408 12:46:58.087595  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.087990  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.088025  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.088155  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH client type: external
	I0408 12:46:58.088178  433557 main.go:141] libmachine: (no-preload-135234) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa (-rw-------)
	I0408 12:46:58.088210  433557 main.go:141] libmachine: (no-preload-135234) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:46:58.088228  433557 main.go:141] libmachine: (no-preload-135234) DBG | About to run SSH command:
	I0408 12:46:58.088241  433557 main.go:141] libmachine: (no-preload-135234) DBG | exit 0
	I0408 12:46:58.220043  433557 main.go:141] libmachine: (no-preload-135234) DBG | SSH cmd err, output: <nil>: 
	I0408 12:46:58.220440  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetConfigRaw
	I0408 12:46:58.221216  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.223881  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224184  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.224202  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.224597  433557 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/config.json ...
	I0408 12:46:58.224804  433557 machine.go:94] provisionDockerMachine start ...
	I0408 12:46:58.224828  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:58.225070  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.227668  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228048  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.228080  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.228242  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.228438  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228647  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.228780  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.228941  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.229238  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.229253  433557 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:46:58.344562  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:46:58.344602  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.344888  433557 buildroot.go:166] provisioning hostname "no-preload-135234"
	I0408 12:46:58.344922  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.345147  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.347895  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348278  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.348311  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.348433  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.348638  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348801  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.348911  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.349077  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.349289  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.349303  433557 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-135234 && echo "no-preload-135234" | sudo tee /etc/hostname
	I0408 12:46:58.478959  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-135234
	
	I0408 12:46:58.478996  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.481692  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482164  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.482187  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.482410  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.482643  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.482851  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.483032  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.483230  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:58.483446  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:58.483465  433557 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-135234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-135234/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-135234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:46:58.606022  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:46:58.606059  433557 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:46:58.606080  433557 buildroot.go:174] setting up certificates
	I0408 12:46:58.606092  433557 provision.go:84] configureAuth start
	I0408 12:46:58.606108  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetMachineName
	I0408 12:46:58.606465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:58.609605  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610046  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.610079  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.610238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.612452  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612756  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.612784  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.612905  433557 provision.go:143] copyHostCerts
	I0408 12:46:58.612974  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:46:58.613029  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:46:58.613097  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:46:58.613200  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:46:58.613209  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:46:58.613232  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:46:58.613295  433557 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:46:58.613302  433557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:46:58.613323  433557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:46:58.613438  433557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.no-preload-135234 san=[127.0.0.1 192.168.61.48 localhost minikube no-preload-135234]
	I0408 12:46:58.832264  433557 provision.go:177] copyRemoteCerts
	I0408 12:46:58.832335  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:46:58.832382  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:58.835259  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835609  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:58.835650  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:58.835883  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:58.836158  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:58.836332  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:58.836468  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:58.922968  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:46:58.949601  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 12:46:58.976832  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:46:59.004643  433557 provision.go:87] duration metric: took 398.533019ms to configureAuth
	I0408 12:46:59.004683  433557 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:46:59.004885  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:46:59.004988  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.008264  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008735  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.008783  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.008987  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.009238  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009416  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.009542  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.009680  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.009866  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.009884  433557 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:46:59.299880  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:46:59.299912  433557 machine.go:97] duration metric: took 1.075094362s to provisionDockerMachine
	I0408 12:46:59.299925  433557 start.go:293] postStartSetup for "no-preload-135234" (driver="kvm2")
	I0408 12:46:59.299940  433557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:46:59.299981  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.300373  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:46:59.300406  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.303274  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303769  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.303806  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.303941  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.304222  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.304575  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.304874  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.395808  433557 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:46:59.400795  433557 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:46:59.400831  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:46:59.400914  433557 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:46:59.401021  433557 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:46:59.401162  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:46:59.411883  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:46:59.438486  433557 start.go:296] duration metric: took 138.54299ms for postStartSetup
	I0408 12:46:59.438546  433557 fix.go:56] duration metric: took 20.19323532s for fixHost
	I0408 12:46:59.438577  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.441875  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442334  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.442366  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.442528  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.442753  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.442969  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.443101  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.443232  433557 main.go:141] libmachine: Using SSH client type: native
	I0408 12:46:59.443414  433557 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0408 12:46:59.443424  433557 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:46:59.560853  433557 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580419.531854515
	
	I0408 12:46:59.560881  433557 fix.go:216] guest clock: 1712580419.531854515
	I0408 12:46:59.560891  433557 fix.go:229] Guest: 2024-04-08 12:46:59.531854515 +0000 UTC Remote: 2024-04-08 12:46:59.438552641 +0000 UTC m=+293.653384531 (delta=93.301874ms)
	I0408 12:46:59.560918  433557 fix.go:200] guest clock delta is within tolerance: 93.301874ms
	I0408 12:46:59.560929  433557 start.go:83] releasing machines lock for "no-preload-135234", held for 20.315655744s
	I0408 12:46:59.560965  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.561244  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:46:59.564248  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564623  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.564658  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.564758  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565245  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565434  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:46:59.565524  433557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:46:59.565571  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.565726  433557 ssh_runner.go:195] Run: cat /version.json
	I0408 12:46:59.565752  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:46:59.568339  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568729  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568766  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.568789  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.568931  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569139  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569201  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:46:59.569227  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:46:59.569300  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:46:59.569392  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569486  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:46:59.569647  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.569782  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:46:59.569900  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:46:59.689264  433557 ssh_runner.go:195] Run: systemctl --version
	I0408 12:46:59.695704  433557 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:46:59.848323  433557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:46:59.856068  433557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:46:59.856171  433557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:46:59.877460  433557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:46:59.877490  433557 start.go:494] detecting cgroup driver to use...
	I0408 12:46:59.877557  433557 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:46:59.895329  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:46:59.910849  433557 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:46:59.910908  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:46:59.925541  433557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:46:59.941511  433557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:00.064454  433557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:00.218535  433557 docker.go:233] disabling docker service ...
	I0408 12:47:00.218614  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:00.234510  433557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:00.249703  433557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:00.403556  433557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:00.569324  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:00.585058  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:00.607536  433557 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:00.607592  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.624701  433557 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:00.624774  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.637414  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.649846  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.662725  433557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:00.675738  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.688667  433557 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.710326  433557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:00.722619  433557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:00.734130  433557 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:00.734227  433557 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:00.749998  433557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:00.761556  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:00.881544  433557 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:01.036952  433557 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:01.037040  433557 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:01.042260  433557 start.go:562] Will wait 60s for crictl version
	I0408 12:47:01.042329  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.046327  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:01.092359  433557 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:01.092465  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.127373  433557 ssh_runner.go:195] Run: crio --version
	I0408 12:47:01.165027  433557 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0408 12:47:00.888196  433674 main.go:141] libmachine: (embed-certs-488947) Waiting to get IP...
	I0408 12:47:00.889196  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:00.889766  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:00.889808  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:00.889702  434916 retry.go:31] will retry after 239.282192ms: waiting for machine to come up
	I0408 12:47:01.130508  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.131075  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.131111  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.131016  434916 retry.go:31] will retry after 388.837258ms: waiting for machine to come up
	I0408 12:47:01.522006  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.522413  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.522444  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.522364  434916 retry.go:31] will retry after 372.310428ms: waiting for machine to come up
	I0408 12:47:01.896325  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:01.896919  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:01.896954  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:01.896851  434916 retry.go:31] will retry after 574.930775ms: waiting for machine to come up
	I0408 12:47:02.474045  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.474626  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.474664  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.474557  434916 retry.go:31] will retry after 506.414729ms: waiting for machine to come up
	I0408 12:47:02.982589  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:02.983203  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:02.983238  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:02.983135  434916 retry.go:31] will retry after 614.351996ms: waiting for machine to come up
	I0408 12:47:03.599165  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:03.599682  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:03.599724  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:03.599640  434916 retry.go:31] will retry after 1.130025801s: waiting for machine to come up
	I0408 12:47:04.731350  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:04.731841  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:04.731874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:04.731791  434916 retry.go:31] will retry after 1.346613974s: waiting for machine to come up
	I0408 12:47:01.166849  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetIP
	I0408 12:47:01.169772  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170183  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:01.170211  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:01.170523  433557 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:01.175336  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:01.193759  433557 kubeadm.go:877] updating cluster {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:01.193949  433557 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 12:47:01.194017  433557 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:01.234439  433557 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0408 12:47:01.234466  433557 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:01.234547  433557 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.234575  433557 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.234589  433557 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.234625  433557 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0408 12:47:01.234576  433557 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.234562  433557 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.234696  433557 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.234554  433557 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.236654  433557 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:01.236678  433557 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.236701  433557 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0408 12:47:01.236686  433557 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.236630  433557 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.236789  433557 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.236623  433557 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.475737  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.476344  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.482596  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.486680  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.490012  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.496685  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0408 12:47:01.510269  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.597119  433557 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0408 12:47:01.597179  433557 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.597238  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696018  433557 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0408 12:47:01.696123  433557 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.696148  433557 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0408 12:47:01.696196  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696201  433557 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.696237  433557 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0408 12:47:01.696254  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.696265  433557 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.696299  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.710260  433557 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0408 12:47:01.710317  433557 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.710369  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799524  433557 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0408 12:47:01.799583  433557 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.799592  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0408 12:47:01.799616  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0408 12:47:01.799626  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0408 12:47:01.799618  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:01.799679  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0408 12:47:01.799734  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0408 12:47:01.916654  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0408 12:47:01.916701  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.916783  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:01.916809  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.923863  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0408 12:47:01.923904  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.923974  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.924021  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:01.924065  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924176  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:01.924067  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:01.926651  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0408 12:47:01.926681  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926722  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0408 12:47:01.926783  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0408 12:47:01.974801  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0408 12:47:01.974875  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974939  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:01.974969  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0408 12:47:01.974944  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0408 12:47:02.062944  433557 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.916991  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.990237597s)
	I0408 12:47:04.917016  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.942055075s)
	I0408 12:47:04.917036  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0408 12:47:04.917040  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0408 12:47:04.917047  433557 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917098  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0408 12:47:04.917117  433557 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.854126587s)
	I0408 12:47:04.917187  433557 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0408 12:47:04.917233  433557 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:04.917278  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:47:06.080429  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:06.080910  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:06.080942  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:06.080866  434916 retry.go:31] will retry after 1.125692215s: waiting for machine to come up
	I0408 12:47:07.208553  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:07.209015  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:07.209040  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:07.208961  434916 retry.go:31] will retry after 1.958080491s: waiting for machine to come up
	I0408 12:47:09.169878  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:09.170289  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:09.170319  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:09.170243  434916 retry.go:31] will retry after 2.241966019s: waiting for machine to come up
	I0408 12:47:08.833969  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.916836964s)
	I0408 12:47:08.834011  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0408 12:47:08.834029  433557 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834032  433557 ssh_runner.go:235] Completed: which crictl: (3.916731005s)
	I0408 12:47:08.834085  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0408 12:47:08.834101  433557 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:11.414435  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:11.414829  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:11.414851  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:11.414786  434916 retry.go:31] will retry after 2.815941766s: waiting for machine to come up
	I0408 12:47:14.233868  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:14.234272  433674 main.go:141] libmachine: (embed-certs-488947) DBG | unable to find current IP address of domain embed-certs-488947 in network mk-embed-certs-488947
	I0408 12:47:14.234318  433674 main.go:141] libmachine: (embed-certs-488947) DBG | I0408 12:47:14.234228  434916 retry.go:31] will retry after 3.213192238s: waiting for machine to come up
	I0408 12:47:10.925471  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.091353526s)
	I0408 12:47:10.925519  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0408 12:47:10.925542  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925581  433557 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.091434251s)
	I0408 12:47:10.925612  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0408 12:47:10.925673  433557 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0408 12:47:10.925782  433557 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:12.405175  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.479529413s)
	I0408 12:47:12.405221  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0408 12:47:12.405238  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:12.405236  433557 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.479424271s)
	I0408 12:47:12.405270  433557 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0408 12:47:12.405296  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0408 12:47:14.283021  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (1.877693108s)
	I0408 12:47:14.283061  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0408 12:47:14.283079  433557 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:14.283143  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0408 12:47:18.781552  433881 start.go:364] duration metric: took 4m47.057472647s to acquireMachinesLock for "old-k8s-version-384148"
	I0408 12:47:18.781636  433881 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:18.781645  433881 fix.go:54] fixHost starting: 
	I0408 12:47:18.782123  433881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:18.782168  433881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:18.804263  433881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0408 12:47:18.804759  433881 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:18.805376  433881 main.go:141] libmachine: Using API Version  1
	I0408 12:47:18.805407  433881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:18.805815  433881 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:18.806091  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:18.806265  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetState
	I0408 12:47:18.809884  433881 fix.go:112] recreateIfNeeded on old-k8s-version-384148: state=Stopped err=<nil>
	I0408 12:47:18.809915  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	W0408 12:47:18.810103  433881 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:18.812906  433881 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-384148" ...
	I0408 12:47:17.451190  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451657  433674 main.go:141] libmachine: (embed-certs-488947) Found IP for machine: 192.168.72.159
	I0408 12:47:17.451705  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has current primary IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.451725  433674 main.go:141] libmachine: (embed-certs-488947) Reserving static IP address...
	I0408 12:47:17.452192  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.452239  433674 main.go:141] libmachine: (embed-certs-488947) Reserved static IP address: 192.168.72.159
	I0408 12:47:17.452259  433674 main.go:141] libmachine: (embed-certs-488947) DBG | skip adding static IP to network mk-embed-certs-488947 - found existing host DHCP lease matching {name: "embed-certs-488947", mac: "52:54:00:f4:fc:17", ip: "192.168.72.159"}
	I0408 12:47:17.452282  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Getting to WaitForSSH function...
	I0408 12:47:17.452297  433674 main.go:141] libmachine: (embed-certs-488947) Waiting for SSH to be available...
	I0408 12:47:17.454780  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455169  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.455208  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.455335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH client type: external
	I0408 12:47:17.455354  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa (-rw-------)
	I0408 12:47:17.455384  433674 main.go:141] libmachine: (embed-certs-488947) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:17.455401  433674 main.go:141] libmachine: (embed-certs-488947) DBG | About to run SSH command:
	I0408 12:47:17.455414  433674 main.go:141] libmachine: (embed-certs-488947) DBG | exit 0
	I0408 12:47:17.585037  433674 main.go:141] libmachine: (embed-certs-488947) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:17.585443  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetConfigRaw
	I0408 12:47:17.586184  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.589492  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.589953  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.589985  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.590269  433674 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/config.json ...
	I0408 12:47:17.590518  433674 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:17.590550  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:17.590798  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.593968  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594570  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.594615  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.594832  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.595073  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595236  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.595442  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.595661  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.595892  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.595905  433674 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:17.708468  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:17.708504  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.708857  433674 buildroot.go:166] provisioning hostname "embed-certs-488947"
	I0408 12:47:17.708890  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.709083  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.712242  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712698  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.712732  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.712928  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.713122  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713298  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.713433  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.713612  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.713801  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.713817  433674 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-488947 && echo "embed-certs-488947" | sudo tee /etc/hostname
	I0408 12:47:17.842964  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-488947
	
	I0408 12:47:17.843017  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.846436  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.846959  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.846992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.847225  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:17.847486  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847726  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:17.847945  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:17.848182  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:17.848373  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:17.848397  433674 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-488947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-488947/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-488947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:17.975087  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:17.975123  433674 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:17.975178  433674 buildroot.go:174] setting up certificates
	I0408 12:47:17.975198  433674 provision.go:84] configureAuth start
	I0408 12:47:17.975212  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetMachineName
	I0408 12:47:17.975606  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:17.979028  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979483  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.979510  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.979754  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:17.982474  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.982944  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:17.982977  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:17.983174  433674 provision.go:143] copyHostCerts
	I0408 12:47:17.983230  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:17.983240  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:17.983291  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:17.983408  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:17.983419  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:17.983444  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:17.983500  433674 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:17.983507  433674 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:17.983526  433674 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:17.983580  433674 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.embed-certs-488947 san=[127.0.0.1 192.168.72.159 embed-certs-488947 localhost minikube]
	I0408 12:47:18.043022  433674 provision.go:177] copyRemoteCerts
	I0408 12:47:18.043092  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:18.043162  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.046335  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046722  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.046757  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.046904  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.047145  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.047333  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.047475  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.134761  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:18.163745  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0408 12:47:18.192946  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:18.220790  433674 provision.go:87] duration metric: took 245.573885ms to configureAuth
	I0408 12:47:18.220827  433674 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:18.221067  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:47:18.221175  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.224177  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.224805  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.224839  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.225098  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.225363  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225569  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.225797  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.226024  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.226202  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.226219  433674 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:18.522682  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:18.522718  433674 machine.go:97] duration metric: took 932.18024ms to provisionDockerMachine
	I0408 12:47:18.522735  433674 start.go:293] postStartSetup for "embed-certs-488947" (driver="kvm2")
	I0408 12:47:18.522750  433674 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:18.522776  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.523133  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:18.523174  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.526523  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.526872  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.526903  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.527101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.527336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.527512  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.527692  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.615353  433674 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:18.620414  433674 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:18.620447  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:18.620525  433674 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:18.620627  433674 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:18.620726  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:18.630585  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:18.658952  433674 start.go:296] duration metric: took 136.200863ms for postStartSetup
	I0408 12:47:18.659004  433674 fix.go:56] duration metric: took 19.097863992s for fixHost
	I0408 12:47:18.659037  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.662115  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662571  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.662606  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.662843  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.663100  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663308  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.663480  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.663676  433674 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:18.663919  433674 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.159 22 <nil> <nil>}
	I0408 12:47:18.663939  433674 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:18.781355  433674 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580438.730334929
	
	I0408 12:47:18.781402  433674 fix.go:216] guest clock: 1712580438.730334929
	I0408 12:47:18.781427  433674 fix.go:229] Guest: 2024-04-08 12:47:18.730334929 +0000 UTC Remote: 2024-04-08 12:47:18.659010209 +0000 UTC m=+303.178294166 (delta=71.32472ms)
	I0408 12:47:18.781457  433674 fix.go:200] guest clock delta is within tolerance: 71.32472ms
	I0408 12:47:18.781465  433674 start.go:83] releasing machines lock for "embed-certs-488947", held for 19.22036189s
	I0408 12:47:18.781502  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.781800  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:18.784825  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785270  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.785313  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.785492  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786104  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:47:18.786456  433674 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:18.786501  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.786626  433674 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:18.786660  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:47:18.789409  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789704  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.789992  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790019  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790149  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790306  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:18.790322  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790338  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:18.790495  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790528  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:47:18.790747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:47:18.790745  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.790867  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:47:18.790997  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:47:18.911025  433674 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:18.917785  433674 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:19.070383  433674 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:19.077521  433674 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:19.077606  433674 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:19.094598  433674 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:19.094636  433674 start.go:494] detecting cgroup driver to use...
	I0408 12:47:19.094750  433674 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:19.111163  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:19.125621  433674 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:19.125688  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:19.141948  433674 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:19.156671  433674 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:19.281688  433674 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:19.455445  433674 docker.go:233] disabling docker service ...
	I0408 12:47:19.455519  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:19.474594  433674 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:19.491301  433674 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:19.646063  433674 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:19.786075  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:19.803535  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:19.829204  433674 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:47:19.829282  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.842132  433674 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:19.842201  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.853915  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.866449  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.879235  433674 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:19.899411  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.920363  433674 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.946414  433674 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:19.958824  433674 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:19.969691  433674 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:19.969754  433674 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:19.986458  433674 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:19.998655  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:20.157494  433674 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:20.318209  433674 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:20.318287  433674 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:20.325414  433674 start.go:562] Will wait 60s for crictl version
	I0408 12:47:20.325490  433674 ssh_runner.go:195] Run: which crictl
	I0408 12:47:20.330070  433674 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:20.383808  433674 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:20.383959  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.417705  433674 ssh_runner.go:195] Run: crio --version
	I0408 12:47:20.454321  433674 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0408 12:47:20.456101  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetIP
	I0408 12:47:20.460035  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.460734  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:47:20.460774  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:47:20.461140  433674 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:20.467650  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:20.486936  433674 kubeadm.go:877] updating cluster {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:20.487105  433674 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:47:20.487176  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:20.529152  433674 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:47:20.529293  433674 ssh_runner.go:195] Run: which lz4
	I0408 12:47:16.552712  433557 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.26954566s)
	I0408 12:47:16.552781  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0408 12:47:16.552797  433557 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:16.552839  433557 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0408 12:47:17.512103  433557 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0408 12:47:17.512151  433557 cache_images.go:123] Successfully loaded all cached images
	I0408 12:47:17.512158  433557 cache_images.go:92] duration metric: took 16.277680364s to LoadCachedImages
	I0408 12:47:17.512171  433557 kubeadm.go:928] updating node { 192.168.61.48 8443 v1.30.0-rc.0 crio true true} ...
	I0408 12:47:17.512324  433557 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-135234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:17.512440  433557 ssh_runner.go:195] Run: crio config
	I0408 12:47:17.561382  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:17.561424  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:17.561441  433557 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:17.561472  433557 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-135234 NodeName:no-preload-135234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:17.561681  433557 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-135234"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:17.561807  433557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0408 12:47:17.574237  433557 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:17.574321  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:17.587129  433557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0408 12:47:17.609022  433557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0408 12:47:17.629656  433557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0408 12:47:17.650373  433557 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:17.655031  433557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:17.670872  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:17.811548  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:17.830945  433557 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234 for IP: 192.168.61.48
	I0408 12:47:17.830974  433557 certs.go:194] generating shared ca certs ...
	I0408 12:47:17.831000  433557 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:17.831219  433557 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:17.831277  433557 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:17.831290  433557 certs.go:256] generating profile certs ...
	I0408 12:47:17.831453  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/client.key
	I0408 12:47:17.831521  433557 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key.dbd08c09
	I0408 12:47:17.831577  433557 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key
	I0408 12:47:17.831823  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:17.831891  433557 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:17.831906  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:17.831946  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:17.831978  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:17.832007  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:17.832059  433557 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:17.832899  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:17.869894  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:17.902893  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:17.943547  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:17.990462  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 12:47:18.026697  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:18.055643  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:18.083357  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/no-preload-135234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:47:18.109247  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:18.134513  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:18.161811  433557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:18.189968  433557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:18.210173  433557 ssh_runner.go:195] Run: openssl version
	I0408 12:47:18.216813  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:18.230693  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236461  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.236526  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:18.244183  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:18.257589  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:18.271235  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277004  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.277088  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:18.283549  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:18.296789  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:18.309587  433557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314537  433557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.314608  433557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:18.320942  433557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:18.333407  433557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:18.338637  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:18.345365  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:18.352262  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:18.359464  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:18.366233  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:18.373280  433557 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:18.380134  433557 kubeadm.go:391] StartCluster: {Name:no-preload-135234 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-135234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:18.380291  433557 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:18.380403  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.423068  433557 cri.go:89] found id: ""
	I0408 12:47:18.423164  433557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:18.435458  433557 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:18.435497  433557 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:18.435503  433557 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:18.435562  433557 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:18.447509  433557 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:18.448720  433557 kubeconfig.go:125] found "no-preload-135234" server: "https://192.168.61.48:8443"
	I0408 12:47:18.451154  433557 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:18.463246  433557 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.48
	I0408 12:47:18.463299  433557 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:18.463315  433557 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:18.463394  433557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:18.522929  433557 cri.go:89] found id: ""
	I0408 12:47:18.523011  433557 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:18.546346  433557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:18.558613  433557 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:18.558640  433557 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:18.558714  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:18.570020  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:18.570106  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:18.581323  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:18.593718  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:18.593778  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:18.606889  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.619251  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:18.619320  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:18.632343  433557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:18.644913  433557 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:18.645004  433557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:18.656965  433557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:18.670774  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:18.785507  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:19.988135  433557 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.202584017s)
	I0408 12:47:19.988174  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.235430  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.316709  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:20.456307  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:20.456393  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:18.814842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .Start
	I0408 12:47:18.815096  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring networks are active...
	I0408 12:47:18.816155  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network default is active
	I0408 12:47:18.816608  433881 main.go:141] libmachine: (old-k8s-version-384148) Ensuring network mk-old-k8s-version-384148 is active
	I0408 12:47:18.817061  433881 main.go:141] libmachine: (old-k8s-version-384148) Getting domain xml...
	I0408 12:47:18.817951  433881 main.go:141] libmachine: (old-k8s-version-384148) Creating domain...
	I0408 12:47:20.144750  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting to get IP...
	I0408 12:47:20.145850  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.146334  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.146403  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.146320  435057 retry.go:31] will retry after 230.92081ms: waiting for machine to come up
	I0408 12:47:20.378905  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.379518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.379572  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.379474  435057 retry.go:31] will retry after 383.208004ms: waiting for machine to come up
	I0408 12:47:20.764287  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:20.764883  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:20.764936  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:20.764858  435057 retry.go:31] will retry after 430.674899ms: waiting for machine to come up
	I0408 12:47:21.197738  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.198231  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.198255  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.198190  435057 retry.go:31] will retry after 553.905508ms: waiting for machine to come up
	I0408 12:47:20.534154  433674 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:47:20.538991  433674 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:20.539034  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:47:22.249270  433674 crio.go:462] duration metric: took 1.715182486s to copy over tarball
	I0408 12:47:22.249391  433674 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:24.966695  433674 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.717265287s)
	I0408 12:47:24.966730  433674 crio.go:469] duration metric: took 2.717416948s to extract the tarball
	I0408 12:47:24.966740  433674 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:25.007656  433674 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:25.063445  433674 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:47:25.063482  433674 cache_images.go:84] Images are preloaded, skipping loading
	I0408 12:47:25.063494  433674 kubeadm.go:928] updating node { 192.168.72.159 8443 v1.29.3 crio true true} ...
	I0408 12:47:25.063627  433674 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-488947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:25.063745  433674 ssh_runner.go:195] Run: crio config
	I0408 12:47:25.122219  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:25.122282  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:25.122298  433674 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:25.122330  433674 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.159 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-488947 NodeName:embed-certs-488947 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:47:25.122556  433674 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-488947"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:25.122633  433674 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:47:25.137001  433674 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:25.137148  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:25.151168  433674 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0408 12:47:25.171698  433674 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:25.195101  433674 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0408 12:47:25.216873  433674 ssh_runner.go:195] Run: grep 192.168.72.159	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:25.221155  433674 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:25.235740  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:25.354135  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:25.377763  433674 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947 for IP: 192.168.72.159
	I0408 12:47:25.377801  433674 certs.go:194] generating shared ca certs ...
	I0408 12:47:25.377824  433674 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:25.378055  433674 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:25.378137  433674 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:25.378161  433674 certs.go:256] generating profile certs ...
	I0408 12:47:25.378299  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/client.key
	I0408 12:47:25.378391  433674 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key.21d2a89c
	I0408 12:47:25.378460  433674 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key
	I0408 12:47:25.378628  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:25.378687  433674 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:25.378702  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:25.378736  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:25.378780  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:25.378818  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:25.378888  433674 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:25.379800  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:25.422370  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:25.468967  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:25.516750  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:20.956916  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.456948  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.957498  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:21.982763  433557 api_server.go:72] duration metric: took 1.526450888s to wait for apiserver process to appear ...
	I0408 12:47:21.982797  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:21.982852  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.363696  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.363732  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.363758  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:25.398003  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:25.398065  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:25.483280  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:21.754065  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:21.754814  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:21.754849  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:21.754719  435057 retry.go:31] will retry after 678.896106ms: waiting for machine to come up
	I0408 12:47:22.435899  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:22.436481  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:22.436518  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:22.436426  435057 retry.go:31] will retry after 624.721191ms: waiting for machine to come up
	I0408 12:47:23.063619  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:23.064268  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:23.064290  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:23.064183  435057 retry.go:31] will retry after 1.072067437s: waiting for machine to come up
	I0408 12:47:24.137999  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:24.138573  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:24.138607  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:24.138517  435057 retry.go:31] will retry after 1.238721936s: waiting for machine to come up
	I0408 12:47:25.378512  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:25.378929  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:25.378956  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:25.378819  435057 retry.go:31] will retry after 1.314708825s: waiting for machine to come up
	I0408 12:47:26.461241  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.461305  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.461321  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.482518  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.482566  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.483554  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.497035  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.497075  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:26.983270  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:26.996515  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:26.996556  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.483125  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.491506  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.491549  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:27.983839  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:27.991044  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:27.991090  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.483669  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.490665  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0408 12:47:28.490703  433557 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0408 12:47:28.983248  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:47:28.998278  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:47:29.007388  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:47:29.007429  433557 api_server.go:131] duration metric: took 7.024624495s to wait for apiserver health ...
	I0408 12:47:29.007444  433557 cni.go:84] Creating CNI manager for ""
	I0408 12:47:29.007452  433557 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:29.009506  433557 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
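	(Editor's note: the repeated 500 responses above are a poll-until-healthy loop against the apiserver's /healthz endpoint, retried roughly every 500ms until a 200 is returned. The sketch below is illustrative only and is not minikube's actual api_server.go code; the function name, timeouts, and the TLS-skip setting are assumptions for a self-contained example.)

	// waitForHealthz polls the given /healthz URL until it returns 200 OK or the
	// timeout elapses, mirroring the retry cadence visible in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The test cluster's apiserver presents a self-signed certificate,
			// so certificate verification is skipped in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported ok
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the interval seen in the log
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.48:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}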
	I0408 12:47:25.561601  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0408 12:47:26.087896  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:26.116559  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:26.145651  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/embed-certs-488947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:26.174910  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:26.206627  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:26.238398  433674 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:26.281684  433674 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:26.306417  433674 ssh_runner.go:195] Run: openssl version
	I0408 12:47:26.313279  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:26.328106  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333727  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.333810  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:26.340200  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:26.352316  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:26.364788  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.369928  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.370003  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:26.376525  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:26.388232  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:26.400301  433674 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405327  433674 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.405407  433674 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:26.411586  433674 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:26.423764  433674 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:26.428995  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:26.435932  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:26.442742  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:26.451458  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:26.458715  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:26.466424  433674 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
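	(Editor's note: the openssl x509 -checkend 86400 runs above assert that each control-plane certificate remains valid for at least another 24 hours. The Go snippet below is an illustrative standalone equivalent, not minikube's code; the certificate path is taken from the log and the helper name is an assumption.)

	// validFor reports whether the PEM certificate at path is still valid d from now.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Same condition as `openssl x509 -checkend <seconds>`: expiry must lie
		// beyond now+d.
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}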
	I0408 12:47:26.473948  433674 kubeadm.go:391] StartCluster: {Name:embed-certs-488947 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-488947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:26.474083  433674 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:26.474158  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.515603  433674 cri.go:89] found id: ""
	I0408 12:47:26.515676  433674 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:26.526818  433674 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:26.526845  433674 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:26.526851  433674 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:26.526908  433674 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:26.537675  433674 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:26.538807  433674 kubeconfig.go:125] found "embed-certs-488947" server: "https://192.168.72.159:8443"
	I0408 12:47:26.540848  433674 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:26.551278  433674 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.159
	I0408 12:47:26.551317  433674 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:26.551330  433674 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:26.551406  433674 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:26.591372  433674 cri.go:89] found id: ""
	I0408 12:47:26.591478  433674 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:26.610486  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:26.621770  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:26.621794  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:26.621869  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:26.632480  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:26.632554  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:26.645878  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:26.659969  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:26.660068  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:26.670611  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.680945  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:26.681034  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:26.692201  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:26.703049  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:26.703126  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:47:26.715887  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:26.727464  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:26.956245  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.722655  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:27.973294  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.086774  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:28.203640  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:28.203755  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:28.704550  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.203852  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.704305  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:29.724333  433674 api_server.go:72] duration metric: took 1.520681062s to wait for apiserver process to appear ...
	I0408 12:47:29.724372  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:47:29.724402  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:29.010843  433557 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:29.029631  433557 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:29.052609  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:29.069954  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:29.070010  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:29.070022  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:29.070034  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:29.070043  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:29.070049  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:47:29.070076  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:29.070087  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:29.070098  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:47:29.070107  433557 system_pods.go:74] duration metric: took 17.469317ms to wait for pod list to return data ...
	I0408 12:47:29.070117  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:29.075401  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:29.075443  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:29.075459  433557 node_conditions.go:105] duration metric: took 5.335891ms to run NodePressure ...
	I0408 12:47:29.075489  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:29.403218  433557 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409235  433557 kubeadm.go:733] kubelet initialised
	I0408 12:47:29.409263  433557 kubeadm.go:734] duration metric: took 6.014758ms waiting for restarted kubelet to initialise ...
	I0408 12:47:29.409276  433557 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:29.418787  433557 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.441264  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441310  433557 pod_ready.go:81] duration metric: took 22.478832ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.441325  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.441336  433557 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.461805  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461916  433557 pod_ready.go:81] duration metric: took 20.564997ms for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.461945  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "etcd-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.461982  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.475160  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475198  433557 pod_ready.go:81] duration metric: took 13.191566ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.475229  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-apiserver-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.475241  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.486266  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486306  433557 pod_ready.go:81] duration metric: took 11.046794ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.486321  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.486331  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:29.857658  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857703  433557 pod_ready.go:81] duration metric: took 371.357848ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:29.857717  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-proxy-tr6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:29.857725  433557 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.258154  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258194  433557 pod_ready.go:81] duration metric: took 400.459219ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.258208  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "kube-scheduler-no-preload-135234" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.258230  433557 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:30.656845  433557 pod_ready.go:97] node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656890  433557 pod_ready.go:81] duration metric: took 398.64565ms for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:47:30.656904  433557 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-135234" hosting pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:30.656915  433557 pod_ready.go:38] duration metric: took 1.247627349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:30.656947  433557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:47:30.683024  433557 ops.go:34] apiserver oom_adj: -16
	I0408 12:47:30.683055  433557 kubeadm.go:591] duration metric: took 12.247545723s to restartPrimaryControlPlane
	I0408 12:47:30.683067  433557 kubeadm.go:393] duration metric: took 12.302946s to StartCluster
	I0408 12:47:30.683095  433557 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.683214  433557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:30.685507  433557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:30.685852  433557 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:47:30.687967  433557 out.go:177] * Verifying Kubernetes components...
	I0408 12:47:30.685951  433557 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:47:30.686122  433557 config.go:182] Loaded profile config "no-preload-135234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0408 12:47:30.689462  433557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:30.689475  433557 addons.go:69] Setting storage-provisioner=true in profile "no-preload-135234"
	I0408 12:47:30.689511  433557 addons.go:234] Setting addon storage-provisioner=true in "no-preload-135234"
	W0408 12:47:30.689521  433557 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:47:30.689555  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.689573  433557 addons.go:69] Setting default-storageclass=true in profile "no-preload-135234"
	I0408 12:47:30.689620  433557 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-135234"
	I0408 12:47:30.689956  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.689995  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.689996  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690026  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.690085  433557 addons.go:69] Setting metrics-server=true in profile "no-preload-135234"
	I0408 12:47:30.690135  433557 addons.go:234] Setting addon metrics-server=true in "no-preload-135234"
	W0408 12:47:30.690146  433557 addons.go:243] addon metrics-server should already be in state true
	I0408 12:47:30.690186  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.690614  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.690692  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.710746  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0408 12:47:30.710947  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33695
	I0408 12:47:30.711153  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0408 12:47:30.711301  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711752  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.711839  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.712010  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712027  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712564  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.712757  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712780  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.712911  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.712926  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.713381  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.713427  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.713660  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714094  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.714304  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.714365  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.714401  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.717892  433557 addons.go:234] Setting addon default-storageclass=true in "no-preload-135234"
	W0408 12:47:30.717959  433557 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:47:30.718004  433557 host.go:66] Checking if "no-preload-135234" exists ...
	I0408 12:47:30.718497  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.718577  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.734825  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0408 12:47:30.736890  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0408 12:47:30.756599  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.756681  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.757290  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757312  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.757318  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757332  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.757774  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.757849  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.758015  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.758082  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.760658  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.760732  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.762999  433557 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:47:30.764689  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:47:30.764714  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:47:30.766392  433557 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:30.764741  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.767890  433557 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:30.767911  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:47:30.767933  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.772580  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.772714  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773015  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773038  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773423  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.773449  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.773462  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773663  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.773875  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.773897  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.774038  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774074  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.774163  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.774227  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:30.779694  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0408 12:47:30.780190  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.780772  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.780793  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.781114  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.781773  433557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:30.781821  433557 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:30.803661  433557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0408 12:47:30.804212  433557 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:30.804828  433557 main.go:141] libmachine: Using API Version  1
	I0408 12:47:30.804847  433557 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:30.805397  433557 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:30.805713  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetState
	I0408 12:47:30.807761  433557 main.go:141] libmachine: (no-preload-135234) Calling .DriverName
	I0408 12:47:30.808244  433557 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:30.808269  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:47:30.808288  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHHostname
	I0408 12:47:30.811598  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812078  433557 main.go:141] libmachine: (no-preload-135234) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:80:06", ip: ""} in network mk-no-preload-135234: {Iface:virbr3 ExpiryTime:2024-04-08 13:46:50 +0000 UTC Type:0 Mac:52:54:00:9e:80:06 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:no-preload-135234 Clientid:01:52:54:00:9e:80:06}
	I0408 12:47:30.812109  433557 main.go:141] libmachine: (no-preload-135234) DBG | domain no-preload-135234 has defined IP address 192.168.61.48 and MAC address 52:54:00:9e:80:06 in network mk-no-preload-135234
	I0408 12:47:30.812264  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHPort
	I0408 12:47:30.812465  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHKeyPath
	I0408 12:47:30.812702  433557 main.go:141] libmachine: (no-preload-135234) Calling .GetSSHUsername
	I0408 12:47:30.812868  433557 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/no-preload-135234/id_rsa Username:docker}
	I0408 12:47:26.695466  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:26.835234  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:26.835265  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:26.695884  435057 retry.go:31] will retry after 1.93787314s: waiting for machine to come up
	I0408 12:47:28.635479  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:28.636019  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:28.636052  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:28.635935  435057 retry.go:31] will retry after 1.906126524s: waiting for machine to come up
	I0408 12:47:30.544699  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:30.545145  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:30.545165  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:30.545084  435057 retry.go:31] will retry after 3.291404288s: waiting for machine to come up
	I0408 12:47:30.979880  433557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:31.004961  433557 node_ready.go:35] waiting up to 6m0s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:31.088114  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:47:31.110971  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:47:31.111017  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:47:31.150193  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:47:31.150229  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:47:31.184811  433557 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.184899  433557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:47:31.214364  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:47:31.244802  433557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:47:32.406228  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.318067686s)
	I0408 12:47:32.406305  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406317  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.406830  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.406897  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.406913  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.406921  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.407242  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407275  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.407319  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.407329  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.532524  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318098791s)
	I0408 12:47:32.532662  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532694  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.532576  433557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287674494s)
	I0408 12:47:32.532774  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.532799  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533022  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533041  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533052  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533060  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533223  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533280  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533286  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.533294  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.533301  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.533457  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533516  433557 main.go:141] libmachine: (no-preload-135234) DBG | Closing plugin on server side
	I0408 12:47:32.533539  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.533546  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.534974  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.534991  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.535019  433557 addons.go:470] Verifying addon metrics-server=true in "no-preload-135234"
	I0408 12:47:32.543151  433557 main.go:141] libmachine: Making call to close driver server
	I0408 12:47:32.543183  433557 main.go:141] libmachine: (no-preload-135234) Calling .Close
	I0408 12:47:32.543549  433557 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:47:32.543571  433557 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:47:32.546033  433557 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0408 12:47:32.894282  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:47:32.894320  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:47:32.894336  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:32.988397  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:32.988442  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.224790  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.232146  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.232176  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:33.724683  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:33.729479  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:47:33.729520  433674 api_server.go:103] status: https://192.168.72.159:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:47:34.224919  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:47:34.230233  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:47:34.247835  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:47:34.247872  433674 api_server.go:131] duration metric: took 4.523492127s to wait for apiserver health ...
	I0408 12:47:34.247883  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:47:34.247890  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:34.249807  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:47:34.251603  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:47:34.265254  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:47:34.288078  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:47:34.301533  433674 system_pods.go:59] 8 kube-system pods found
	I0408 12:47:34.301570  433674 system_pods.go:61] "coredns-76f75df574-hq2mm" [cfc7bd40-0b7d-4e00-ac55-b3ae796018ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:47:34.301577  433674 system_pods.go:61] "etcd-embed-certs-488947" [eb29ace5-8ad9-4080-a875-2eb83dcea583] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:47:34.301585  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [8e97033f-996a-4b64-9474-7b4d562eb1c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:47:34.301591  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [b3db7631-d953-418e-9c72-f299d0287a2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:47:34.301595  433674 system_pods.go:61] "kube-proxy-2gn8m" [c31d8f0d-d6c1-4afa-b64c-7fc422d493f2] Running
	I0408 12:47:34.301600  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b9b29f85-7a75-4b09-b6cd-940ff42326d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:47:34.301604  433674 system_pods.go:61] "metrics-server-57f55c9bc5-z2ztl" [d9dc47ad-3370-4e55-a724-8c529c723992] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:47:34.301607  433674 system_pods.go:61] "storage-provisioner" [4953dc3a-31ca-464d-9530-34f488ed9a02] Running
	I0408 12:47:34.301617  433674 system_pods.go:74] duration metric: took 13.514139ms to wait for pod list to return data ...
	I0408 12:47:34.301624  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:47:34.305931  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:47:34.305962  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:47:34.305974  433674 node_conditions.go:105] duration metric: took 4.345624ms to run NodePressure ...
	I0408 12:47:34.305993  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:34.598392  433674 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603606  433674 kubeadm.go:733] kubelet initialised
	I0408 12:47:34.603632  433674 kubeadm.go:734] duration metric: took 5.204237ms waiting for restarted kubelet to initialise ...
	I0408 12:47:34.603641  433674 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:34.610027  433674 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:32.547718  433557 addons.go:505] duration metric: took 1.861769291s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0408 12:47:33.008857  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:35.510251  433557 node_ready.go:53] node "no-preload-135234" has status "Ready":"False"
	I0408 12:47:33.837729  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:33.838183  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | unable to find current IP address of domain old-k8s-version-384148 in network mk-old-k8s-version-384148
	I0408 12:47:33.838213  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | I0408 12:47:33.838133  435057 retry.go:31] will retry after 3.949072436s: waiting for machine to come up
	I0408 12:47:39.502172  433439 start.go:364] duration metric: took 55.254308447s to acquireMachinesLock for "default-k8s-diff-port-527454"
	I0408 12:47:39.502232  433439 start.go:96] Skipping create...Using existing machine configuration
	I0408 12:47:39.502245  433439 fix.go:54] fixHost starting: 
	I0408 12:47:39.502725  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:47:39.502767  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:47:39.523738  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0408 12:47:39.525022  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:47:39.525614  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:47:39.525646  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:47:39.526077  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:47:39.526307  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:47:39.526448  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:47:39.528207  433439 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527454: state=Stopped err=<nil>
	I0408 12:47:39.528241  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	W0408 12:47:39.528449  433439 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 12:47:39.530360  433439 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-527454" ...
	I0408 12:47:36.618430  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.619713  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:38.009213  433557 node_ready.go:49] node "no-preload-135234" has status "Ready":"True"
	I0408 12:47:38.009241  433557 node_ready.go:38] duration metric: took 7.004239102s for node "no-preload-135234" to be "Ready" ...
	I0408 12:47:38.009250  433557 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:47:38.014665  433557 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020024  433557 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:38.020054  433557 pod_ready.go:81] duration metric: took 5.358174ms for pod "coredns-7db6d8ff4d-ndz4x" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:38.020067  433557 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:40.030803  433557 pod_ready.go:102] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:37.789177  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789704  433881 main.go:141] libmachine: (old-k8s-version-384148) Found IP for machine: 192.168.39.245
	I0408 12:47:37.789740  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has current primary IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.789750  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserving static IP address...
	I0408 12:47:37.790172  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.790212  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | skip adding static IP to network mk-old-k8s-version-384148 - found existing host DHCP lease matching {name: "old-k8s-version-384148", mac: "52:54:00:06:da:95", ip: "192.168.39.245"}
	I0408 12:47:37.790227  433881 main.go:141] libmachine: (old-k8s-version-384148) Reserved static IP address: 192.168.39.245
	I0408 12:47:37.790244  433881 main.go:141] libmachine: (old-k8s-version-384148) Waiting for SSH to be available...
	I0408 12:47:37.790259  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Getting to WaitForSSH function...
	I0408 12:47:37.792465  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792793  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.792829  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.792884  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH client type: external
	I0408 12:47:37.792932  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa (-rw-------)
	I0408 12:47:37.792974  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:47:37.793007  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | About to run SSH command:
	I0408 12:47:37.793018  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | exit 0
	I0408 12:47:37.920427  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | SSH cmd err, output: <nil>: 
	I0408 12:47:37.920854  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetConfigRaw
	I0408 12:47:37.921644  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:37.924168  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924631  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.924663  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.924954  433881 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/config.json ...
	I0408 12:47:37.925170  433881 machine.go:94] provisionDockerMachine start ...
	I0408 12:47:37.925191  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:37.925526  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:37.928176  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928552  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:37.928583  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:37.928740  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:37.928916  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929095  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:37.929260  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:37.929421  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:37.929626  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:37.929637  433881 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:47:38.044349  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:47:38.044378  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044695  433881 buildroot.go:166] provisioning hostname "old-k8s-version-384148"
	I0408 12:47:38.044728  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.044955  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.047788  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048116  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.048149  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.048291  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.048487  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.048842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.049024  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.049242  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.049258  433881 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-384148 && echo "old-k8s-version-384148" | sudo tee /etc/hostname
	I0408 12:47:38.175102  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-384148
	
	I0408 12:47:38.175132  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.178015  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178431  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.178461  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.178659  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.178872  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179057  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.179198  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.179347  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.179578  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.179604  433881 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-384148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-384148/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-384148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:47:38.306997  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:47:38.307037  433881 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:47:38.307072  433881 buildroot.go:174] setting up certificates
	I0408 12:47:38.307088  433881 provision.go:84] configureAuth start
	I0408 12:47:38.307099  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetMachineName
	I0408 12:47:38.307464  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:38.310078  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310595  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.310643  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.310683  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.313155  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313521  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.313551  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.313694  433881 provision.go:143] copyHostCerts
	I0408 12:47:38.313748  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:47:38.313768  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:47:38.313829  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:47:38.313919  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:47:38.313927  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:47:38.313945  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:47:38.314007  433881 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:47:38.314014  433881 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:47:38.314031  433881 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:47:38.314080  433881 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-384148 san=[127.0.0.1 192.168.39.245 localhost minikube old-k8s-version-384148]
	I0408 12:47:38.748791  433881 provision.go:177] copyRemoteCerts
	I0408 12:47:38.748865  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:47:38.748895  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.752034  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752458  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.752499  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.752695  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.752900  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.753075  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.753266  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:38.849144  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0408 12:47:38.880279  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:47:38.907293  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:47:38.936116  433881 provision.go:87] duration metric: took 629.014723ms to configureAuth
	I0408 12:47:38.936152  433881 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:47:38.936321  433881 config.go:182] Loaded profile config "old-k8s-version-384148": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:47:38.936403  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:38.939013  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939399  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:38.939457  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:38.939593  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:38.939861  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940059  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:38.940215  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:38.940377  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:38.940622  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:38.940648  433881 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:47:39.241516  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:47:39.241543  433881 machine.go:97] duration metric: took 1.316359736s to provisionDockerMachine
	I0408 12:47:39.241554  433881 start.go:293] postStartSetup for "old-k8s-version-384148" (driver="kvm2")
	I0408 12:47:39.241566  433881 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:47:39.241585  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.241901  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:47:39.241935  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.244908  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245307  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.245336  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.245486  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.245692  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.245890  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.246051  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.333612  433881 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:47:39.338826  433881 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:47:39.338853  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:47:39.338919  433881 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:47:39.338988  433881 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:47:39.339071  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:47:39.352064  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:39.380881  433881 start.go:296] duration metric: took 139.30723ms for postStartSetup
	I0408 12:47:39.380939  433881 fix.go:56] duration metric: took 20.599293118s for fixHost
	I0408 12:47:39.380970  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.384147  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384556  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.384610  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.384795  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.385010  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385212  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.385411  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.385627  433881 main.go:141] libmachine: Using SSH client type: native
	I0408 12:47:39.385869  433881 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0408 12:47:39.385885  433881 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0408 12:47:39.501982  433881 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580459.470646239
	
	I0408 12:47:39.502031  433881 fix.go:216] guest clock: 1712580459.470646239
	I0408 12:47:39.502042  433881 fix.go:229] Guest: 2024-04-08 12:47:39.470646239 +0000 UTC Remote: 2024-04-08 12:47:39.38094595 +0000 UTC m=+307.818603739 (delta=89.700289ms)
	I0408 12:47:39.502073  433881 fix.go:200] guest clock delta is within tolerance: 89.700289ms
	I0408 12:47:39.502084  433881 start.go:83] releasing machines lock for "old-k8s-version-384148", held for 20.720472846s
	I0408 12:47:39.502114  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.502407  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:39.505864  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506319  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.506352  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.506704  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507318  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507574  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .DriverName
	I0408 12:47:39.507677  433881 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:47:39.507767  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.507908  433881 ssh_runner.go:195] Run: cat /version.json
	I0408 12:47:39.507932  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHHostname
	I0408 12:47:39.510993  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511077  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511476  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511522  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:39.511563  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511589  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:39.511743  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511842  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHPort
	I0408 12:47:39.511923  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512084  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHKeyPath
	I0408 12:47:39.512093  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512239  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetSSHUsername
	I0408 12:47:39.512246  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.512413  433881 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/old-k8s-version-384148/id_rsa Username:docker}
	I0408 12:47:39.633304  433881 ssh_runner.go:195] Run: systemctl --version
	I0408 12:47:39.642014  433881 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:47:39.804068  433881 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:47:39.812237  433881 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:47:39.812324  433881 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:47:39.835586  433881 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:47:39.835621  433881 start.go:494] detecting cgroup driver to use...
	I0408 12:47:39.835721  433881 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:47:39.860378  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:47:39.882019  433881 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:47:39.882096  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:47:39.898112  433881 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:47:39.913562  433881 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:47:40.047449  433881 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:47:40.188730  433881 docker.go:233] disabling docker service ...
	I0408 12:47:40.188822  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:47:40.205050  433881 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:47:40.222432  433881 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:47:40.386332  433881 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:47:40.561583  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:47:40.582135  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:47:40.611648  433881 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0408 12:47:40.611751  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.629357  433881 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:47:40.629458  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.646030  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.661349  433881 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:47:40.674997  433881 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:47:40.688255  433881 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:47:40.706703  433881 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:47:40.706763  433881 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:47:40.724839  433881 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:47:40.738018  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:40.906300  433881 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:47:41.073054  433881 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:47:41.073141  433881 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:47:41.078610  433881 start.go:562] Will wait 60s for crictl version
	I0408 12:47:41.078679  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:41.083133  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:47:41.126948  433881 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:47:41.127101  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.160091  433881 ssh_runner.go:195] Run: crio --version
	I0408 12:47:41.195044  433881 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0408 12:47:41.196514  433881 main.go:141] libmachine: (old-k8s-version-384148) Calling .GetIP
	I0408 12:47:41.199376  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.199831  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:da:95", ip: ""} in network mk-old-k8s-version-384148: {Iface:virbr4 ExpiryTime:2024-04-08 13:47:31 +0000 UTC Type:0 Mac:52:54:00:06:da:95 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:old-k8s-version-384148 Clientid:01:52:54:00:06:da:95}
	I0408 12:47:41.199860  433881 main.go:141] libmachine: (old-k8s-version-384148) DBG | domain old-k8s-version-384148 has defined IP address 192.168.39.245 and MAC address 52:54:00:06:da:95 in network mk-old-k8s-version-384148
	I0408 12:47:41.200145  433881 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0408 12:47:41.204867  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:41.221274  433881 kubeadm.go:877] updating cluster {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:47:41.221469  433881 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 12:47:41.221550  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:41.275430  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:41.275531  433881 ssh_runner.go:195] Run: which lz4
	I0408 12:47:41.280606  433881 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0408 12:47:41.285549  433881 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:47:41.285606  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0408 12:47:39.531815  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Start
	I0408 12:47:39.531988  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring networks are active...
	I0408 12:47:39.532969  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network default is active
	I0408 12:47:39.533486  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Ensuring network mk-default-k8s-diff-port-527454 is active
	I0408 12:47:39.533947  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Getting domain xml...
	I0408 12:47:39.534767  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Creating domain...
	I0408 12:47:40.935150  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting to get IP...
	I0408 12:47:40.936250  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:40.936898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:40.936778  435248 retry.go:31] will retry after 215.442539ms: waiting for machine to come up
	I0408 12:47:41.154393  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.154940  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.154852  435248 retry.go:31] will retry after 274.982374ms: waiting for machine to come up
	I0408 12:47:41.431442  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.431990  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.432023  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.431933  435248 retry.go:31] will retry after 335.077282ms: waiting for machine to come up
	I0408 12:47:40.620537  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:42.622241  433674 pod_ready.go:102] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:44.118493  433674 pod_ready.go:92] pod "coredns-76f75df574-hq2mm" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.118532  433674 pod_ready.go:81] duration metric: took 9.508474788s for pod "coredns-76f75df574-hq2mm" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.118545  433674 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626843  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.626869  433674 pod_ready.go:81] duration metric: took 508.318376ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.626882  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633488  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:44.633521  433674 pod_ready.go:81] duration metric: took 6.630145ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:44.633535  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027744  433557 pod_ready.go:92] pod "etcd-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.027771  433557 pod_ready.go:81] duration metric: took 3.007695895s for pod "etcd-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.027782  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034038  433557 pod_ready.go:92] pod "kube-apiserver-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.034076  433557 pod_ready.go:81] duration metric: took 6.28617ms for pod "kube-apiserver-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.034090  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039232  433557 pod_ready.go:92] pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.039262  433557 pod_ready.go:81] duration metric: took 5.161613ms for pod "kube-controller-manager-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.039277  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045793  433557 pod_ready.go:92] pod "kube-proxy-tr6td" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.045887  433557 pod_ready.go:81] duration metric: took 6.600896ms for pod "kube-proxy-tr6td" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.045908  433557 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.209976  433557 pod_ready.go:92] pod "kube-scheduler-no-preload-135234" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:41.210003  433557 pod_ready.go:81] duration metric: took 164.085848ms for pod "kube-scheduler-no-preload-135234" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:41.210018  433557 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:43.220338  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:45.718170  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:43.224219  433881 crio.go:462] duration metric: took 1.943671791s to copy over tarball
	I0408 12:47:43.224306  433881 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:47:41.768734  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:41.769194  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:41.769131  435248 retry.go:31] will retry after 581.590127ms: waiting for machine to come up
	I0408 12:47:42.352156  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.352975  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:42.353017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:42.352850  435248 retry.go:31] will retry after 673.545679ms: waiting for machine to come up
	I0408 12:47:43.028329  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029066  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.029101  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.028956  435248 retry.go:31] will retry after 690.795418ms: waiting for machine to come up
	I0408 12:47:43.721435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.721999  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:43.722025  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:43.721948  435248 retry.go:31] will retry after 941.917321ms: waiting for machine to come up
	I0408 12:47:44.665002  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665468  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:44.665495  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:44.665406  435248 retry.go:31] will retry after 1.037587737s: waiting for machine to come up
	I0408 12:47:45.705319  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705792  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:45.705822  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:45.705730  435248 retry.go:31] will retry after 1.287151334s: waiting for machine to come up
	I0408 12:47:46.640995  433674 pod_ready.go:102] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:48.558627  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.558666  433674 pod_ready.go:81] duration metric: took 3.925119514s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.558683  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583378  433674 pod_ready.go:92] pod "kube-proxy-2gn8m" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.583405  433674 pod_ready.go:81] duration metric: took 24.715384ms for pod "kube-proxy-2gn8m" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.583416  433674 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598937  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:47:48.598969  433674 pod_ready.go:81] duration metric: took 15.544342ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:48.598983  433674 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	I0408 12:47:47.918307  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:50.219513  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:46.621677  433881 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.397321627s)
	I0408 12:47:46.881725  433881 crio.go:469] duration metric: took 3.657463869s to extract the tarball
	I0408 12:47:46.881748  433881 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:47:46.936087  433881 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:47:46.980999  433881 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0408 12:47:46.981031  433881 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0408 12:47:46.981086  433881 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.981115  433881 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.981160  433881 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:46.981180  433881 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.981197  433881 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.981206  433881 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.981332  433881 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.981525  433881 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:46.983461  433881 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0408 12:47:46.983449  433881 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:46.983481  433881 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:46.983437  433881 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:46.983501  433881 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:46.983517  433881 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:46.983495  433881 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.215815  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.218682  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.218812  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0408 12:47:47.226057  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.237986  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.249572  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.255059  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.331367  433881 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0408 12:47:47.331429  433881 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.331484  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.403757  433881 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0408 12:47:47.403846  433881 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.403899  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.408643  433881 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0408 12:47:47.408702  433881 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0408 12:47:47.408755  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443551  433881 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0408 12:47:47.443589  433881 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0408 12:47:47.443609  433881 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.443626  433881 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.443678  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.443682  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453637  433881 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0408 12:47:47.453695  433881 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.453749  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453825  433881 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0408 12:47:47.453864  433881 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.453884  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0408 12:47:47.453908  433881 ssh_runner.go:195] Run: which crictl
	I0408 12:47:47.453990  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0408 12:47:47.454014  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0408 12:47:47.456910  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0408 12:47:47.457446  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0408 12:47:47.569243  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0408 12:47:47.569295  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0408 12:47:47.569320  433881 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0408 12:47:47.583668  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0408 12:47:47.583967  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0408 12:47:47.589545  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0408 12:47:47.589707  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0408 12:47:47.638036  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0408 12:47:47.639955  433881 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0408 12:47:47.860567  433881 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:47:48.010273  433881 cache_images.go:92] duration metric: took 1.029223281s to LoadCachedImages
	W0408 12:47:48.010419  433881 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18588-368424/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0408 12:47:48.010440  433881 kubeadm.go:928] updating node { 192.168.39.245 8443 v1.20.0 crio true true} ...
	I0408 12:47:48.010631  433881 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-384148 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:47:48.010729  433881 ssh_runner.go:195] Run: crio config
	I0408 12:47:48.065431  433881 cni.go:84] Creating CNI manager for ""
	I0408 12:47:48.065461  433881 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:47:48.065478  433881 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:47:48.065504  433881 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.245 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-384148 NodeName:old-k8s-version-384148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0408 12:47:48.065684  433881 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-384148"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:47:48.065779  433881 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0408 12:47:48.080840  433881 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:47:48.080950  433881 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:47:48.094581  433881 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0408 12:47:48.117392  433881 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:47:48.138262  433881 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0408 12:47:48.165039  433881 ssh_runner.go:195] Run: grep 192.168.39.245	control-plane.minikube.internal$ /etc/hosts
	I0408 12:47:48.171191  433881 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:47:48.189417  433881 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:47:48.341553  433881 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:47:48.363215  433881 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148 for IP: 192.168.39.245
	I0408 12:47:48.363249  433881 certs.go:194] generating shared ca certs ...
	I0408 12:47:48.363272  433881 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:48.363473  433881 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:47:48.363571  433881 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:47:48.363589  433881 certs.go:256] generating profile certs ...
	I0408 12:47:48.426881  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/client.key
	I0408 12:47:48.427040  433881 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key.b153d6a1
	I0408 12:47:48.427110  433881 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key
	I0408 12:47:48.427261  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:47:48.427310  433881 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:47:48.427321  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:47:48.427354  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:47:48.427422  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:47:48.427462  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:47:48.427523  433881 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:47:48.428524  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:47:48.476520  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:47:48.522452  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:47:48.561710  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:47:48.607052  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0408 12:47:48.651541  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:47:48.704207  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:47:48.742684  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/old-k8s-version-384148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0408 12:47:48.772703  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:47:48.803476  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:47:48.833154  433881 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:47:48.863183  433881 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:47:48.885940  433881 ssh_runner.go:195] Run: openssl version
	I0408 12:47:48.894847  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:47:48.910969  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916386  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.916449  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:47:48.923008  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:47:48.936122  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:47:48.952344  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957735  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.957815  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:47:48.964720  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:47:48.978862  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:47:48.993113  433881 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998835  433881 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:47:48.998906  433881 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:47:49.005710  433881 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:47:49.019197  433881 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:47:49.024728  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:47:49.031831  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:47:49.038736  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:47:49.045946  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:47:49.053040  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:47:49.060064  433881 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0408 12:47:49.066969  433881 kubeadm.go:391] StartCluster: {Name:old-k8s-version-384148 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-384148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.245 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:47:49.067090  433881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:47:49.067156  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.107266  433881 cri.go:89] found id: ""
	I0408 12:47:49.107336  433881 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:47:49.120092  433881 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:47:49.120126  433881 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:47:49.120132  433881 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:47:49.120190  433881 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:47:49.133500  433881 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:47:49.134686  433881 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-384148" does not appear in /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:47:49.135619  433881 kubeconfig.go:62] /home/jenkins/minikube-integration/18588-368424/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-384148" cluster setting kubeconfig missing "old-k8s-version-384148" context setting]
	I0408 12:47:49.136897  433881 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:47:49.139048  433881 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:47:49.154878  433881 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.245
	I0408 12:47:49.154925  433881 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:47:49.154941  433881 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:47:49.155009  433881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:47:49.207364  433881 cri.go:89] found id: ""
	I0408 12:47:49.207445  433881 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:47:49.228390  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:47:49.245160  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:47:49.245193  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:47:49.245266  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:47:49.256832  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:47:49.256913  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:47:49.268773  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:47:49.282821  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:47:49.282898  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:47:49.297896  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.312075  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:47:49.312158  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:47:49.327398  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:47:49.341467  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:47:49.341604  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
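
The four grep/rm pairs above are the stale-config cleanup: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing, so kubeadm can regenerate it. A minimal stand-alone Go sketch of that loop follows; the paths and endpoint come from the log, but the code itself is illustrative and not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Files and endpoint taken from the log lines above.
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	const endpoint = "https://control-plane.minikube.internal:8443"

	for _, path := range confs {
		data, err := os.ReadFile(path)
		// A missing file or a file without the endpoint both fall through to
		// removal, matching the grep-then-rm sequence in the log.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("stale or absent, would remove:", path)
			// os.Remove(path) // the real cleanup deletes the file here
		}
	}
}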
	I0408 12:47:49.354096  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:47:49.366717  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:49.514951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.442724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.716276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.833506  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:47:50.927655  433881 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:47:50.927798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:51.428588  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
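
Once the kubeadm init phases above have rewritten certs, kubeconfigs, the kubelet config, static-pod manifests and local etcd config, the restart path simply waits for a kube-apiserver process to appear, re-running pgrep roughly every 500ms (the repeated Run lines that follow). A toy version of that wait loop, with an assumed timeout, might look like this:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs pgrep until a kube-apiserver process shows up
// or the (assumed) timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}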
	I0408 12:47:46.994162  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994640  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:46.994672  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:46.994593  435248 retry.go:31] will retry after 1.863771905s: waiting for machine to come up
	I0408 12:47:48.860673  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:48.861257  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:48.861151  435248 retry.go:31] will retry after 2.204894376s: waiting for machine to come up
	I0408 12:47:51.067423  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067909  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:51.067937  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:51.067864  435248 retry.go:31] will retry after 2.625423179s: waiting for machine to come up
	I0408 12:47:50.608007  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:53.108084  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:52.717545  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:55.218944  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:51.928035  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.427844  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:52.928718  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.927869  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.428707  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:54.928798  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.427884  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:55.928273  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:56.427941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:53.695295  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695826  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:53.695862  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:53.695772  435248 retry.go:31] will retry after 4.111917473s: waiting for machine to come up
	I0408 12:47:55.606909  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:58.111708  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:57.717559  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:59.718066  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:47:56.927927  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.428068  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.928800  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.427871  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:58.927822  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.428740  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:59.927924  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.427948  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:00.928792  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:01.428657  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:47:57.809179  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809697  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | unable to find current IP address of domain default-k8s-diff-port-527454 in network mk-default-k8s-diff-port-527454
	I0408 12:47:57.809729  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | I0408 12:47:57.809632  435248 retry.go:31] will retry after 4.27502806s: waiting for machine to come up
	I0408 12:48:02.086033  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086558  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has current primary IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.086586  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Found IP for machine: 192.168.50.7
	I0408 12:48:02.086603  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserving static IP address...
	I0408 12:48:02.087069  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.087105  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Reserved static IP address: 192.168.50.7
	I0408 12:48:02.087137  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | skip adding static IP to network mk-default-k8s-diff-port-527454 - found existing host DHCP lease matching {name: "default-k8s-diff-port-527454", mac: "52:54:00:43:ff:4b", ip: "192.168.50.7"}
	I0408 12:48:02.087158  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Getting to WaitForSSH function...
	I0408 12:48:02.087177  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Waiting for SSH to be available...
	I0408 12:48:02.089228  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089585  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.089608  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.089799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH client type: external
	I0408 12:48:02.089840  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Using SSH private key: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa (-rw-------)
	I0408 12:48:02.089885  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0408 12:48:02.089900  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | About to run SSH command:
	I0408 12:48:02.089917  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | exit 0
	I0408 12:48:02.216245  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | SSH cmd err, output: <nil>: 
	I0408 12:48:02.216684  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetConfigRaw
	I0408 12:48:02.217582  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.220543  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.220961  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.220995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.221282  433439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/config.json ...
	I0408 12:48:02.221480  433439 machine.go:94] provisionDockerMachine start ...
	I0408 12:48:02.221499  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:02.221738  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.224371  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.224770  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.224802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.225007  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.225236  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225399  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.225548  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.225740  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.225957  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.225970  433439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 12:48:02.336716  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 12:48:02.336754  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337074  433439 buildroot.go:166] provisioning hostname "default-k8s-diff-port-527454"
	I0408 12:48:02.337108  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.337351  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.340133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340539  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.340583  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.340653  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.340842  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341016  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.341171  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.341346  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.341539  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.341556  433439 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-527454 && echo "default-k8s-diff-port-527454" | sudo tee /etc/hostname
	I0408 12:48:02.464462  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-527454
	
	I0408 12:48:02.464507  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.467682  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468082  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.468118  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.468335  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.468595  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468782  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.468954  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.469154  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:02.469372  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:02.469392  433439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-527454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-527454/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-527454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 12:48:02.593971  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 12:48:02.594006  433439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18588-368424/.minikube CaCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18588-368424/.minikube}
	I0408 12:48:02.594061  433439 buildroot.go:174] setting up certificates
	I0408 12:48:02.594078  433439 provision.go:84] configureAuth start
	I0408 12:48:02.594092  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetMachineName
	I0408 12:48:02.594431  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:02.597587  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.598043  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.598206  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.600898  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601267  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.601299  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.601497  433439 provision.go:143] copyHostCerts
	I0408 12:48:02.601562  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem, removing ...
	I0408 12:48:02.601588  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem
	I0408 12:48:02.601653  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/ca.pem (1082 bytes)
	I0408 12:48:02.601841  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem, removing ...
	I0408 12:48:02.601857  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem
	I0408 12:48:02.601888  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/cert.pem (1123 bytes)
	I0408 12:48:02.601966  433439 exec_runner.go:144] found /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem, removing ...
	I0408 12:48:02.601981  433439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem
	I0408 12:48:02.602010  433439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18588-368424/.minikube/key.pem (1675 bytes)
	I0408 12:48:02.602088  433439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-527454 san=[127.0.0.1 192.168.50.7 default-k8s-diff-port-527454 localhost minikube]
	I0408 12:48:02.845116  433439 provision.go:177] copyRemoteCerts
	I0408 12:48:02.845190  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 12:48:02.845217  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:02.848054  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:02.848406  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:02.848559  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:02.848817  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:02.848986  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:02.849125  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:02.934223  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0408 12:48:02.962726  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0408 12:48:02.992767  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 12:48:03.021973  433439 provision.go:87] duration metric: took 427.87874ms to configureAuth
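
configureAuth above refreshes the host-side CA/client certs, generates a server certificate whose SANs are the machine's addresses and names (127.0.0.1, 192.168.50.7, default-k8s-diff-port-527454, localhost, minikube), then scp's the results to /etc/docker on the guest. The sketch below builds a certificate with the same SANs using only the standard library; it is self-signed for brevity, whereas minikube signs with its own CA, so treat it as an illustration rather than the real provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs mirror the provision.go:117 line above.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.7")}
	names := []string{"default-k8s-diff-port-527454", "localhost", "minikube"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-527454"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     names,
	}

	// Self-signed for brevity; minikube signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}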
	I0408 12:48:03.022009  433439 buildroot.go:189] setting minikube options for container-runtime
	I0408 12:48:03.022270  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:48:03.022382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.025407  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025765  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.025802  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.025959  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.026215  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026379  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.026510  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.026659  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.026834  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.026856  433439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0408 12:48:03.310263  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0408 12:48:03.310307  433439 machine.go:97] duration metric: took 1.088813603s to provisionDockerMachine
	I0408 12:48:03.310323  433439 start.go:293] postStartSetup for "default-k8s-diff-port-527454" (driver="kvm2")
	I0408 12:48:03.310337  433439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 12:48:03.310362  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.310758  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 12:48:03.310799  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.313533  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.313968  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.314001  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.314201  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.314375  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.314584  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.314760  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.400087  433439 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 12:48:03.405240  433439 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 12:48:03.405272  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/addons for local assets ...
	I0408 12:48:03.405351  433439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18588-368424/.minikube/files for local assets ...
	I0408 12:48:03.405450  433439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem -> 3758172.pem in /etc/ssl/certs
	I0408 12:48:03.405570  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 12:48:03.415947  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:03.448935  433439 start.go:296] duration metric: took 138.593583ms for postStartSetup
	I0408 12:48:03.449025  433439 fix.go:56] duration metric: took 23.946779964s for fixHost
	I0408 12:48:03.449055  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.452026  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452392  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.452435  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.452630  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.452844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453063  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.453248  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.453420  433439 main.go:141] libmachine: Using SSH client type: native
	I0408 12:48:03.453604  433439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0408 12:48:03.453615  433439 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 12:48:03.565710  433439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712580483.551031252
	
	I0408 12:48:03.565738  433439 fix.go:216] guest clock: 1712580483.551031252
	I0408 12:48:03.565750  433439 fix.go:229] Guest: 2024-04-08 12:48:03.551031252 +0000 UTC Remote: 2024-04-08 12:48:03.44903588 +0000 UTC m=+361.760256784 (delta=101.995372ms)
	I0408 12:48:03.565777  433439 fix.go:200] guest clock delta is within tolerance: 101.995372ms
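
The guest/host clock comparison above only triggers a resync when the delta exceeds a tolerance; here the guest clock (read with date +%s.%N over SSH) is about 102ms ahead of the host, which passes. The same arithmetic, reproduced with the exact timestamps from the log and an assumed 1s threshold:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log: guest clock 1712580483.551031252,
	// host (remote) clock 2024-04-08 12:48:03.44903588 UTC.
	guest := time.Unix(1712580483, 551031252).UTC()
	host := time.Date(2024, time.April, 8, 12, 48, 3, 449035880, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = time.Second // assumed threshold, not minikube's actual value
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	// Prints: delta=101.995372ms within tolerance=1s: true
}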
	I0408 12:48:03.565787  433439 start.go:83] releasing machines lock for "default-k8s-diff-port-527454", held for 24.063582343s
	I0408 12:48:03.565806  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.566106  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:03.569409  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.569776  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.569814  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.570017  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570577  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570831  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:48:03.570952  433439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0408 12:48:03.571021  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.571121  433439 ssh_runner.go:195] Run: cat /version.json
	I0408 12:48:03.571146  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:48:03.573939  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574167  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574300  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574333  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574469  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574594  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:03.574621  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:03.574674  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.574757  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:48:03.574871  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.574957  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:48:03.575130  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.575441  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:48:03.575590  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:48:03.695930  433439 ssh_runner.go:195] Run: systemctl --version
	I0408 12:48:03.702915  433439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0408 12:48:03.853737  433439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 12:48:03.860218  433439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 12:48:03.860287  433439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 12:48:03.877827  433439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 12:48:03.877861  433439 start.go:494] detecting cgroup driver to use...
	I0408 12:48:03.877943  433439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 12:48:03.897232  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 12:48:03.913028  433439 docker.go:217] disabling cri-docker service (if available) ...
	I0408 12:48:03.913112  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0408 12:48:03.929574  433439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0408 12:48:03.946880  433439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0408 12:48:04.083524  433439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0408 12:48:04.243842  433439 docker.go:233] disabling docker service ...
	I0408 12:48:04.243938  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0408 12:48:04.260459  433439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0408 12:48:04.276119  433439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0408 12:48:04.428999  433439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0408 12:48:04.571431  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0408 12:48:04.589661  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 12:48:04.612872  433439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0408 12:48:04.612954  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.625841  433439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0408 12:48:04.625939  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.638868  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.652106  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.664883  433439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 12:48:04.678149  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.691069  433439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.711329  433439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0408 12:48:04.725917  433439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 12:48:04.738875  433439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 12:48:04.738941  433439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 12:48:04.756784  433439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 12:48:04.769852  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:04.895658  433439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0408 12:48:05.056165  433439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0408 12:48:05.056270  433439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0408 12:48:05.061838  433439 start.go:562] Will wait 60s for crictl version
	I0408 12:48:05.061918  433439 ssh_runner.go:195] Run: which crictl
	I0408 12:48:05.066280  433439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 12:48:05.110966  433439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0408 12:48:05.111084  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.142272  433439 ssh_runner.go:195] Run: crio --version
	I0408 12:48:05.176138  433439 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
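
Before announcing the Kubernetes and CRI-O versions above, the start path waits up to 60s for the CRI socket to exist and then probes crictl version. A small stand-in for those two waits (the 60s timeout is the one stated in the log; the code itself is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until the CRI socket exists, mirroring the
// "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// Same follow-up probe the log runs next.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}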
	I0408 12:48:00.606508  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:03.107018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:05.109926  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:02.220836  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:04.718465  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:01.928628  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.427857  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:02.927917  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.428824  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:03.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.428084  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:04.928751  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.428193  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.927854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.427836  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:05.177382  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetIP
	I0408 12:48:05.180028  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180334  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:48:05.180363  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:48:05.180635  433439 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0408 12:48:05.185436  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 12:48:05.199001  433439 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 12:48:05.199130  433439 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 12:48:05.199174  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:05.239255  433439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0408 12:48:05.239358  433439 ssh_runner.go:195] Run: which lz4
	I0408 12:48:05.244115  433439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 12:48:05.249135  433439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 12:48:05.249169  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0408 12:48:07.606284  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.607161  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.720025  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:09.219059  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:06.928222  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.427868  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:07.927863  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.428510  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:08.928662  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.427932  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:09.928613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.427890  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:10.928934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.428085  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:06.889921  433439 crio.go:462] duration metric: took 1.645848876s to copy over tarball
	I0408 12:48:06.890006  433439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 12:48:09.403589  433439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.513555281s)
	I0408 12:48:09.403620  433439 crio.go:469] duration metric: took 2.513669951s to extract the tarball
	I0408 12:48:09.403627  433439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 12:48:09.446487  433439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0408 12:48:09.494576  433439 crio.go:514] all images are preloaded for cri-o runtime.
	I0408 12:48:09.494606  433439 cache_images.go:84] Images are preloaded, skipping loading
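
The preload handling above works by listing images with crictl: when the expected kube-apiserver tag is absent, the preloaded tarball is copied in and extracted, after which the same listing confirms all images are present. A rough sketch of that check (the JSON field names follow crictl's output format; the helper itself is hypothetical):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList models just the part of `crictl images --output json` used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage is a hypothetical helper: true if any listed repo tag contains want.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.3")
	fmt.Println(ok, err) // false triggers the preload-tarball path seen above
}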
	I0408 12:48:09.494614  433439 kubeadm.go:928] updating node { 192.168.50.7 8444 v1.29.3 crio true true} ...
	I0408 12:48:09.494822  433439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-527454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 12:48:09.494917  433439 ssh_runner.go:195] Run: crio config
	I0408 12:48:09.541809  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:09.541839  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:09.541859  433439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 12:48:09.541887  433439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-527454 NodeName:default-k8s-diff-port-527454 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 12:48:09.542105  433439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-527454"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 12:48:09.542201  433439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0408 12:48:09.553494  433439 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 12:48:09.553591  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0408 12:48:09.564970  433439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0408 12:48:09.584888  433439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 12:48:09.604538  433439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
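The YAML copied to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document file: an InitConfiguration and ClusterConfiguration for kubeadm, a KubeletConfiguration, and a KubeProxyConfiguration. As a minimal sketch (not minikube code) of sanity-checking such a rendered file, one can split it on document separators and read back each document's apiVersion and kind; the file path and the sigs.k8s.io/yaml dependency below are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml" // assumed dependency; converts YAML to JSON before unmarshalling
)

func main() {
	// Illustrative local path; the log shows /var/tmp/minikube/kubeadm.yaml.new on the node.
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// Split the multi-document YAML on "---" separators and report each document's kind.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var meta struct {
			APIVersion string `json:"apiVersion"`
			Kind       string `json:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			fmt.Println("unparsable document:", err)
			continue
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}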
	I0408 12:48:09.623993  433439 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0408 12:48:09.628368  433439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
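The /etc/hosts update above follows a replace-and-append pattern: drop any existing control-plane.minikube.internal line, append the current IP mapping, and copy the result back via a temp file. A minimal Go sketch of the same idea (illustrative only, not the minikube implementation, which shells out as shown in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-style file so that exactly one line maps host to ip.
func ensureHostsEntry(path, ip, host string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(raw), "\n") {
		// Drop any previous mapping for the host, keep everything else.
		if strings.HasSuffix(strings.TrimSpace(line), host) && strings.TrimSpace(line) != "" {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // analogous to the cp step in the shell one-liner
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.7", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}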
	I0408 12:48:09.642170  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:48:09.789791  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:48:09.808943  433439 certs.go:68] Setting up /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454 for IP: 192.168.50.7
	I0408 12:48:09.808972  433439 certs.go:194] generating shared ca certs ...
	I0408 12:48:09.808995  433439 certs.go:226] acquiring lock for ca certs: {Name:mk437f743d6bdec5f37197e2e8bf0fd46f91f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:48:09.809194  433439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key
	I0408 12:48:09.809242  433439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key
	I0408 12:48:09.809253  433439 certs.go:256] generating profile certs ...
	I0408 12:48:09.809344  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/client.key
	I0408 12:48:09.809415  433439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key.ad1d04eb
	I0408 12:48:09.809457  433439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key
	I0408 12:48:09.809645  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem (1338 bytes)
	W0408 12:48:09.809699  433439 certs.go:480] ignoring /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817_empty.pem, impossibly tiny 0 bytes
	I0408 12:48:09.809713  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca-key.pem (1679 bytes)
	I0408 12:48:09.809742  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/ca.pem (1082 bytes)
	I0408 12:48:09.809764  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/cert.pem (1123 bytes)
	I0408 12:48:09.809792  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/certs/key.pem (1675 bytes)
	I0408 12:48:09.809851  433439 certs.go:484] found cert: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem (1708 bytes)
	I0408 12:48:09.810516  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 12:48:09.866085  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 12:48:09.899718  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 12:48:09.941704  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0408 12:48:09.976180  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0408 12:48:10.014420  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 12:48:10.044380  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 12:48:10.072034  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/default-k8s-diff-port-527454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 12:48:10.099417  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/ssl/certs/3758172.pem --> /usr/share/ca-certificates/3758172.pem (1708 bytes)
	I0408 12:48:10.126143  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 12:48:10.154244  433439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18588-368424/.minikube/certs/375817.pem --> /usr/share/ca-certificates/375817.pem (1338 bytes)
	I0408 12:48:10.183954  433439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0408 12:48:10.207277  433439 ssh_runner.go:195] Run: openssl version
	I0408 12:48:10.213691  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3758172.pem && ln -fs /usr/share/ca-certificates/3758172.pem /etc/ssl/certs/3758172.pem"
	I0408 12:48:10.228406  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233736  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 11:30 /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.233798  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3758172.pem
	I0408 12:48:10.240236  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3758172.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 12:48:10.253382  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 12:48:10.267783  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273234  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 11:21 /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.273318  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 12:48:10.279925  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 12:48:10.292710  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/375817.pem && ln -fs /usr/share/ca-certificates/375817.pem /etc/ssl/certs/375817.pem"
	I0408 12:48:10.305381  433439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310629  433439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 11:30 /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.310703  433439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375817.pem
	I0408 12:48:10.317063  433439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/375817.pem /etc/ssl/certs/51391683.0"
	I0408 12:48:10.330320  433439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 12:48:10.336138  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0408 12:48:10.343341  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0408 12:48:10.350536  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0408 12:48:10.357665  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0408 12:48:10.364925  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0408 12:48:10.372314  433439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
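Each `openssl x509 -noout -in <cert> -checkend 86400` run above checks that the certificate will not expire within the next 24 hours. A hedged Go equivalent using crypto/x509 (the cert path in main is taken from the log and is only an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Mirrors: openssl x509 -noout -in <cert> -checkend 86400
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}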
	I0408 12:48:10.380001  433439 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-527454 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-527454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 12:48:10.380107  433439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0408 12:48:10.380174  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.425378  433439 cri.go:89] found id: ""
	I0408 12:48:10.425475  433439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0408 12:48:10.438972  433439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0408 12:48:10.439000  433439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0408 12:48:10.439005  433439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0408 12:48:10.439051  433439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0408 12:48:10.452072  433439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:48:10.453410  433439 kubeconfig.go:125] found "default-k8s-diff-port-527454" server: "https://192.168.50.7:8444"
	I0408 12:48:10.456022  433439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0408 12:48:10.469116  433439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0408 12:48:10.469171  433439 kubeadm.go:1154] stopping kube-system containers ...
	I0408 12:48:10.469188  433439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0408 12:48:10.469256  433439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0408 12:48:10.517874  433439 cri.go:89] found id: ""
	I0408 12:48:10.517969  433439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0408 12:48:10.538088  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:48:10.551560  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:48:10.551580  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:48:10.551636  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:48:10.564123  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:48:10.564209  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:48:10.578691  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:48:10.590692  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:48:10.590765  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:48:10.602902  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.616831  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:48:10.616922  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:48:10.629213  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:48:10.641625  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:48:10.641709  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:48:10.653162  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:48:10.665261  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:10.811712  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.107002  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.606976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:12.188805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:14.221750  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:11.928656  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.427975  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.927923  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.428494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.928608  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.427852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:14.927874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.427855  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:15.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:16.427929  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:11.901885  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.09013292s)
	I0408 12:48:11.975836  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.237051  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:12.329550  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
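On this restart path the control plane is rebuilt phase by phase rather than with a single full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, then etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A rough sketch of driving that sequence from Go (binary path and config path copied from the log; minikube actually runs these over SSH with sudo, and error handling is simplified here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	config := "/var/tmp/minikube/kubeadm.yaml"
	kubeadm := "/var/lib/minikube/binaries/v1.29.3/kubeadm"
	// Phase order as it appears in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control-plane static pods regenerated")
}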
	I0408 12:48:12.460345  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:48:12.460457  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:12.961443  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.460681  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:13.520828  433439 api_server.go:72] duration metric: took 1.060470201s to wait for apiserver process to appear ...
	I0408 12:48:13.520866  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:48:13.520899  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:13.521407  433439 api_server.go:269] stopped: https://192.168.50.7:8444/healthz: Get "https://192.168.50.7:8444/healthz": dial tcp 192.168.50.7:8444: connect: connection refused
	I0408 12:48:14.022007  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.564485  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.564526  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:16.564543  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:16.617870  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0408 12:48:16.617904  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0408 12:48:17.021290  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.026545  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.026578  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:17.521124  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:17.529552  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0408 12:48:17.529596  433439 api_server.go:103] status: https://192.168.50.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0408 12:48:18.021125  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:48:18.037000  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:48:18.049656  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:48:18.049699  433439 api_server.go:131] duration metric: took 4.528823991s to wait for apiserver health ...
	I0408 12:48:18.049722  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:48:18.049730  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:48:18.051495  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
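The healthz exchange above is the usual restart progression: 403 while the anonymous probe is rejected before the RBAC bootstrap roles exist, 500 while poststarthooks such as rbac/bootstrap-roles are still failing, then 200 once the apiserver is ready, with retries roughly every 500ms. A minimal polling sketch under those assumptions (endpoint taken from the log; TLS verification is skipped here because the apiserver cert is signed by the cluster CA, which is not in the system trust store; this is not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification for this illustrative probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.50.7:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}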
	I0408 12:48:16.607222  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:18.607837  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.717612  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:19.217050  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:16.928269  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.427867  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:17.927873  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.428658  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.928649  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.428746  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:19.928734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.427874  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:20.927842  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:21.427823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:18.052916  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:48:18.072115  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:48:18.111408  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:48:18.130585  433439 system_pods.go:59] 8 kube-system pods found
	I0408 12:48:18.130629  433439 system_pods.go:61] "coredns-76f75df574-r99kj" [171e271b-eec6-4238-afb1-82a2f228c225] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0408 12:48:18.130641  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [7019f1eb-58ef-4b1f-acf3-ed3c1ed84623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0408 12:48:18.130651  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [80ccd16d-d883-4c92-bb13-abe2d412532c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0408 12:48:18.130661  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [78d513aa-1f24-42c0-bfb9-4c20fdee63f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0408 12:48:18.130669  433439 system_pods.go:61] "kube-proxy-ztmmc" [de09a26e-cd95-401a-b575-977fcd660c47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0408 12:48:18.130683  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [eac4d549-1763-45b8-be11-b3b9e83f5110] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0408 12:48:18.130702  433439 system_pods.go:61] "metrics-server-57f55c9bc5-44qbm" [52631fc6-84d0-443b-ba42-de35a65db0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:48:18.130714  433439 system_pods.go:61] "storage-provisioner" [82e8b0d0-6c22-4644-8bd1-b48887b0fe82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0408 12:48:18.130730  433439 system_pods.go:74] duration metric: took 19.293309ms to wait for pod list to return data ...
	I0408 12:48:18.130745  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:48:18.135625  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:48:18.135663  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:48:18.135679  433439 node_conditions.go:105] duration metric: took 4.924641ms to run NodePressure ...
	I0408 12:48:18.135724  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0408 12:48:18.416272  433439 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424302  433439 kubeadm.go:733] kubelet initialised
	I0408 12:48:18.424325  433439 kubeadm.go:734] duration metric: took 8.015642ms waiting for restarted kubelet to initialise ...
	I0408 12:48:18.424342  433439 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:48:18.436706  433439 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.447063  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447102  433439 pod_ready.go:81] duration metric: took 10.361708ms for pod "coredns-76f75df574-r99kj" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.447116  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "coredns-76f75df574-r99kj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.447126  433439 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.460464  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460496  433439 pod_ready.go:81] duration metric: took 13.357612ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.460513  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.460523  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.469991  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470035  433439 pod_ready.go:81] duration metric: took 9.502493ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.470072  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.470083  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.516886  433439 pod_ready.go:97] node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516920  433439 pod_ready.go:81] duration metric: took 46.823396ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	E0408 12:48:18.516933  433439 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-527454" hosting pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-527454" has status "Ready":"False"
	I0408 12:48:18.516940  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915101  433439 pod_ready.go:92] pod "kube-proxy-ztmmc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:18.915131  433439 pod_ready.go:81] duration metric: took 398.182437ms for pod "kube-proxy-ztmmc" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:18.915144  433439 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:20.922456  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.107083  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.108249  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.219995  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:23.718091  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:21.928654  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.428887  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.928103  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.428482  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:23.928236  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.428613  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:24.928054  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.428566  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:25.927852  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.428729  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:22.922607  433439 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:24.922155  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:48:24.922185  433439 pod_ready.go:81] duration metric: took 6.007031338s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:48:24.922200  433439 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
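The pod_ready.go wait loop above keys off the pod's standard Ready condition, and skips control-plane pods while the hosting node itself is not Ready. A condensed sketch of the same readiness check using client-go (kubeconfig path is a placeholder; the namespace and pod name are taken from the log for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-default-k8s-diff-port-527454", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}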
	I0408 12:48:25.607653  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.216429  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:28.218553  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.717516  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:26.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:27.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.427853  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:28.928281  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.428354  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:29.928419  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.427934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:30.927823  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.427840  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:26.931412  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:29.430930  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:30.608369  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:33.107424  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:32.717551  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.216256  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:31.928618  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:32.928067  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.428776  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:33.928583  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.428774  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:34.928033  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.428825  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:35.928696  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.428311  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:31.931958  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:34.430950  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:35.607018  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.607820  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:40.106361  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:37.217721  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:39.218016  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:36.928915  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.427831  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:37.927946  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.428554  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:38.928429  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.428001  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:39.927802  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.427845  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:40.928013  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:41.428569  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:36.929987  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:38.931900  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.429986  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:42.605609  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:44.606744  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.717196  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:43.718405  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:41.928955  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.428794  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:42.927856  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.428217  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.928796  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.428756  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:44.927829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.428563  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:45.927812  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:46.427854  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:43.430411  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:45.932993  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.607058  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.607716  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.216568  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:48.218325  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.718153  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:46.928607  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.427829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:47.928499  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.428241  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:48.928393  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.428488  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:49.927941  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.428003  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:50.928815  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:50.928888  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:50.970680  433881 cri.go:89] found id: ""
	I0408 12:48:50.970713  433881 logs.go:276] 0 containers: []
	W0408 12:48:50.970725  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:50.970733  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:50.970799  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:51.009804  433881 cri.go:89] found id: ""
	I0408 12:48:51.009838  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.009848  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:51.009854  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:51.009909  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:51.049581  433881 cri.go:89] found id: ""
	I0408 12:48:51.049617  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.049626  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:51.049633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:51.049706  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:51.086286  433881 cri.go:89] found id: ""
	I0408 12:48:51.086314  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.086323  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:51.086329  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:51.086395  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:51.126888  433881 cri.go:89] found id: ""
	I0408 12:48:51.126916  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.126927  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:51.126935  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:51.126998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:51.168650  433881 cri.go:89] found id: ""
	I0408 12:48:51.168684  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.168695  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:51.168702  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:51.168759  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:51.205661  433881 cri.go:89] found id: ""
	I0408 12:48:51.205693  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.205706  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:51.205714  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:51.205782  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:51.245659  433881 cri.go:89] found id: ""
	I0408 12:48:51.245699  433881 logs.go:276] 0 containers: []
	W0408 12:48:51.245711  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:51.245725  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:51.245742  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:51.310079  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:51.310120  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:51.354093  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:51.354124  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:51.405031  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:51.405074  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:51.421147  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:51.421183  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:51.547658  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:48.430488  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:50.432250  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:51.106453  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.606447  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:53.217434  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:55.717265  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.047880  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:54.062872  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:54.062960  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:54.109041  433881 cri.go:89] found id: ""
	I0408 12:48:54.109068  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.109079  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:54.109087  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:54.109209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:54.150194  433881 cri.go:89] found id: ""
	I0408 12:48:54.150223  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.150231  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:54.150237  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:54.150292  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:54.191735  433881 cri.go:89] found id: ""
	I0408 12:48:54.191767  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.191785  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:54.191792  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:54.191872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:54.251766  433881 cri.go:89] found id: ""
	I0408 12:48:54.251798  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.251807  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:54.251813  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:54.251878  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:54.292179  433881 cri.go:89] found id: ""
	I0408 12:48:54.292215  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.292229  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:54.292237  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:54.292311  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:54.329338  433881 cri.go:89] found id: ""
	I0408 12:48:54.329368  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.329380  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:54.329389  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:54.329458  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:54.377094  433881 cri.go:89] found id: ""
	I0408 12:48:54.377132  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.377144  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:54.377153  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:54.377227  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:54.415835  433881 cri.go:89] found id: ""
	I0408 12:48:54.415865  433881 logs.go:276] 0 containers: []
	W0408 12:48:54.415873  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:54.415884  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:54.415896  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:54.471985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:54.472040  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:54.487674  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:54.487727  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:54.575138  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:54.575161  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:54.575176  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:54.647315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:54.647364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:52.928902  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:54.931253  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:56.106505  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.108187  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:58.218754  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.718600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:57.189969  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:48:57.204122  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:48:57.204201  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:48:57.241210  433881 cri.go:89] found id: ""
	I0408 12:48:57.241243  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.241252  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:48:57.241258  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:48:57.241310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:48:57.279553  433881 cri.go:89] found id: ""
	I0408 12:48:57.279591  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.279600  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:48:57.279606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:48:57.279658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:48:57.323516  433881 cri.go:89] found id: ""
	I0408 12:48:57.323560  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.323585  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:48:57.323593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:48:57.323663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:48:57.363723  433881 cri.go:89] found id: ""
	I0408 12:48:57.363755  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.363766  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:48:57.363772  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:48:57.363839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:48:57.400144  433881 cri.go:89] found id: ""
	I0408 12:48:57.400178  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.400190  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:48:57.400208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:48:57.400274  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:48:57.441875  433881 cri.go:89] found id: ""
	I0408 12:48:57.441907  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.441919  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:48:57.441928  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:48:57.441999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:48:57.478024  433881 cri.go:89] found id: ""
	I0408 12:48:57.478057  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.478066  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:48:57.478074  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:48:57.478144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:48:57.516602  433881 cri.go:89] found id: ""
	I0408 12:48:57.516633  433881 logs.go:276] 0 containers: []
	W0408 12:48:57.516642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:48:57.516652  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:48:57.516666  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:48:57.573832  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:48:57.573883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:48:57.590751  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:48:57.590793  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:48:57.670650  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:48:57.670679  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:48:57.670698  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:48:57.746440  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:48:57.746488  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:00.291359  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:00.306024  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:00.306116  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:00.352262  433881 cri.go:89] found id: ""
	I0408 12:49:00.352294  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.352305  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:00.352314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:00.352390  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:00.392371  433881 cri.go:89] found id: ""
	I0408 12:49:00.392403  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.392415  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:00.392423  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:00.392488  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:00.434848  433881 cri.go:89] found id: ""
	I0408 12:49:00.434876  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.434885  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:00.434892  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:00.434951  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:00.476998  433881 cri.go:89] found id: ""
	I0408 12:49:00.477032  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.477045  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:00.477054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:00.477128  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:00.514520  433881 cri.go:89] found id: ""
	I0408 12:49:00.514560  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.514569  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:00.514575  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:00.514643  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:00.555942  433881 cri.go:89] found id: ""
	I0408 12:49:00.555981  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.555996  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:00.556005  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:00.556074  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:00.603944  433881 cri.go:89] found id: ""
	I0408 12:49:00.604053  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.604079  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:00.604097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:00.604193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:00.660591  433881 cri.go:89] found id: ""
	I0408 12:49:00.660628  433881 logs.go:276] 0 containers: []
	W0408 12:49:00.660642  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:00.660655  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:00.660677  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:00.731774  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:00.731821  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:00.747891  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:00.747947  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:00.827051  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:00.827085  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:00.827100  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:00.907231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:00.907280  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:48:57.431032  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:48:59.930470  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:00.608450  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.106647  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.218064  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:03.460014  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:03.474615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:03.474716  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:03.513072  433881 cri.go:89] found id: ""
	I0408 12:49:03.513106  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.513115  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:03.513122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:03.513179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:03.549307  433881 cri.go:89] found id: ""
	I0408 12:49:03.549349  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.549358  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:03.549364  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:03.549508  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:03.587463  433881 cri.go:89] found id: ""
	I0408 12:49:03.587503  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.587516  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:03.587524  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:03.587601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:03.628171  433881 cri.go:89] found id: ""
	I0408 12:49:03.628202  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.628211  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:03.628217  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:03.628284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:03.663630  433881 cri.go:89] found id: ""
	I0408 12:49:03.663661  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.663672  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:03.663680  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:03.663762  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:03.704078  433881 cri.go:89] found id: ""
	I0408 12:49:03.704112  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.704124  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:03.704134  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:03.704202  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:03.744820  433881 cri.go:89] found id: ""
	I0408 12:49:03.744856  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.744868  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:03.744877  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:03.744945  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:03.785826  433881 cri.go:89] found id: ""
	I0408 12:49:03.785855  433881 logs.go:276] 0 containers: []
	W0408 12:49:03.785868  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:03.785878  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:03.785905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:03.800987  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:03.801019  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:03.882870  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:03.882905  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:03.882924  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:03.967335  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:03.967382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:04.008319  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:04.008348  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:06.562156  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:06.579058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:06.579137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:01.933210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:04.428894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.428974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:05.606895  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:08.106819  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:07.718023  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:06.635302  433881 cri.go:89] found id: ""
	I0408 12:49:06.635333  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.635345  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:06.635353  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:06.635422  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:06.696626  433881 cri.go:89] found id: ""
	I0408 12:49:06.696675  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.696692  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:06.696700  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:06.696769  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:06.738555  433881 cri.go:89] found id: ""
	I0408 12:49:06.738589  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.738601  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:06.738610  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:06.738675  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:06.780471  433881 cri.go:89] found id: ""
	I0408 12:49:06.780507  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.780516  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:06.780522  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:06.780587  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:06.823514  433881 cri.go:89] found id: ""
	I0408 12:49:06.823558  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.823571  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:06.823580  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:06.823671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:06.863990  433881 cri.go:89] found id: ""
	I0408 12:49:06.864029  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.864045  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:06.864055  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:06.864123  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:06.905383  433881 cri.go:89] found id: ""
	I0408 12:49:06.905419  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.905432  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:06.905440  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:06.905510  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:06.947761  433881 cri.go:89] found id: ""
	I0408 12:49:06.947792  433881 logs.go:276] 0 containers: []
	W0408 12:49:06.947805  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:06.947814  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:06.947826  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:06.988895  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:06.988930  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:07.043205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:07.043251  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:07.057788  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:07.057823  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:07.137854  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:07.137884  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:07.137903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:09.724678  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:09.739337  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:09.739408  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:09.777803  433881 cri.go:89] found id: ""
	I0408 12:49:09.777837  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.777848  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:09.777857  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:09.777934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:09.818101  433881 cri.go:89] found id: ""
	I0408 12:49:09.818132  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.818144  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:09.818152  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:09.818220  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:09.860148  433881 cri.go:89] found id: ""
	I0408 12:49:09.860186  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.860211  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:09.860218  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:09.860284  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:09.899008  433881 cri.go:89] found id: ""
	I0408 12:49:09.899042  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.899054  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:09.899063  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:09.899130  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:09.938235  433881 cri.go:89] found id: ""
	I0408 12:49:09.938270  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.938281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:09.938290  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:09.938361  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:09.977404  433881 cri.go:89] found id: ""
	I0408 12:49:09.977438  433881 logs.go:276] 0 containers: []
	W0408 12:49:09.977447  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:09.977454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:09.977505  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:10.015959  433881 cri.go:89] found id: ""
	I0408 12:49:10.015992  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.016008  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:10.016015  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:10.016083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:10.055723  433881 cri.go:89] found id: ""
	I0408 12:49:10.055753  433881 logs.go:276] 0 containers: []
	W0408 12:49:10.055762  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:10.055771  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:10.055785  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:10.131028  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:10.131061  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:10.131079  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:10.213484  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:10.213528  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:10.261403  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:10.261554  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:10.316130  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:10.316189  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:08.429894  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.930925  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:10.609607  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:13.106296  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.716182  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.717779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:12.832344  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:12.846324  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:12.846446  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:12.883721  433881 cri.go:89] found id: ""
	I0408 12:49:12.883761  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.883776  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:12.883784  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:12.883850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:12.922869  433881 cri.go:89] found id: ""
	I0408 12:49:12.922903  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.922914  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:12.922923  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:12.922989  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:12.965672  433881 cri.go:89] found id: ""
	I0408 12:49:12.965711  433881 logs.go:276] 0 containers: []
	W0408 12:49:12.965723  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:12.965731  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:12.965804  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:13.005430  433881 cri.go:89] found id: ""
	I0408 12:49:13.005466  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.005479  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:13.005494  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:13.005556  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:13.047068  433881 cri.go:89] found id: ""
	I0408 12:49:13.047095  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.047103  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:13.047110  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:13.047175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:13.085014  433881 cri.go:89] found id: ""
	I0408 12:49:13.085047  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.085058  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:13.085067  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:13.085134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:13.122582  433881 cri.go:89] found id: ""
	I0408 12:49:13.122621  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.122633  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:13.122643  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:13.122707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:13.159159  433881 cri.go:89] found id: ""
	I0408 12:49:13.159190  433881 logs.go:276] 0 containers: []
	W0408 12:49:13.159199  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:13.159209  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:13.159221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:13.211508  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:13.211553  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:13.228228  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:13.228265  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:13.306379  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:13.306419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:13.306437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:13.383403  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:13.383462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:15.933673  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:15.947963  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:15.948039  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:15.988497  433881 cri.go:89] found id: ""
	I0408 12:49:15.988526  433881 logs.go:276] 0 containers: []
	W0408 12:49:15.988534  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:15.988541  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:15.988605  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:16.026695  433881 cri.go:89] found id: ""
	I0408 12:49:16.026733  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.026758  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:16.026766  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:16.026850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:16.072415  433881 cri.go:89] found id: ""
	I0408 12:49:16.072452  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.072487  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:16.072498  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:16.072576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:16.111534  433881 cri.go:89] found id: ""
	I0408 12:49:16.111563  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.111575  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:16.111583  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:16.111653  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:16.151515  433881 cri.go:89] found id: ""
	I0408 12:49:16.151550  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.151562  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:16.151572  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:16.151640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:16.189055  433881 cri.go:89] found id: ""
	I0408 12:49:16.189085  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.189094  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:16.189101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:16.189153  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:16.226759  433881 cri.go:89] found id: ""
	I0408 12:49:16.226790  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.226800  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:16.226807  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:16.226860  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:16.269035  433881 cri.go:89] found id: ""
	I0408 12:49:16.269068  433881 logs.go:276] 0 containers: []
	W0408 12:49:16.269079  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:16.269092  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:16.269110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:16.322426  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:16.322472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:16.337670  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:16.337704  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:16.422746  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:16.422777  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:16.422795  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:16.508089  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:16.508140  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:12.931911  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:14.933011  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:15.607174  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:18.106346  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:17.216822  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.216874  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.055162  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:19.069970  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:19.070044  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:19.110031  433881 cri.go:89] found id: ""
	I0408 12:49:19.110062  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.110070  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:19.110077  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:19.110125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:19.150644  433881 cri.go:89] found id: ""
	I0408 12:49:19.150681  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.150693  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:19.150702  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:19.150770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:19.193032  433881 cri.go:89] found id: ""
	I0408 12:49:19.193064  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.193076  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:19.193084  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:19.193157  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:19.230634  433881 cri.go:89] found id: ""
	I0408 12:49:19.230661  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.230670  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:19.230676  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:19.230727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:19.269083  433881 cri.go:89] found id: ""
	I0408 12:49:19.269116  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.269125  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:19.269132  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:19.269183  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:19.309072  433881 cri.go:89] found id: ""
	I0408 12:49:19.309105  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.309117  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:19.309126  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:19.309208  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:19.349582  433881 cri.go:89] found id: ""
	I0408 12:49:19.349613  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.349622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:19.349633  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:19.349687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:19.388015  433881 cri.go:89] found id: ""
	I0408 12:49:19.388046  433881 logs.go:276] 0 containers: []
	W0408 12:49:19.388053  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:19.388062  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:19.388076  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:19.469726  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:19.469750  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:19.469766  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:19.551098  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:19.551138  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:19.595343  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:19.595377  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:19.655983  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:19.656031  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:17.429653  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:19.432135  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:20.609415  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.105576  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:25.106666  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:21.217932  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.720613  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:22.172109  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:22.187123  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:22.187197  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:22.227242  433881 cri.go:89] found id: ""
	I0408 12:49:22.227269  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.227277  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:22.227283  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:22.227344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:22.266238  433881 cri.go:89] found id: ""
	I0408 12:49:22.266270  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.266279  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:22.266285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:22.266345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:22.304245  433881 cri.go:89] found id: ""
	I0408 12:49:22.304273  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.304281  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:22.304288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:22.304344  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:22.348994  433881 cri.go:89] found id: ""
	I0408 12:49:22.349035  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.349048  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:22.349058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:22.349134  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:22.389590  433881 cri.go:89] found id: ""
	I0408 12:49:22.389622  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.389631  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:22.389638  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:22.389708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:22.425775  433881 cri.go:89] found id: ""
	I0408 12:49:22.425809  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.425821  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:22.425830  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:22.425898  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:22.468155  433881 cri.go:89] found id: ""
	I0408 12:49:22.468184  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.468192  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:22.468198  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:22.468250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:22.507866  433881 cri.go:89] found id: ""
	I0408 12:49:22.507906  433881 logs.go:276] 0 containers: []
	W0408 12:49:22.507915  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:22.507934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:22.507953  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:22.559847  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:22.559893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:22.575153  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:22.575188  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:22.656324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:22.656354  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:22.656372  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:22.737542  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:22.737589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.282655  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:25.296701  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:25.296770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:25.337101  433881 cri.go:89] found id: ""
	I0408 12:49:25.337141  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.337152  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:25.337161  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:25.337228  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:25.376383  433881 cri.go:89] found id: ""
	I0408 12:49:25.376453  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.376467  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:25.376481  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:25.376576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:25.415819  433881 cri.go:89] found id: ""
	I0408 12:49:25.415852  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.415865  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:25.415873  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:25.415941  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:25.457500  433881 cri.go:89] found id: ""
	I0408 12:49:25.457549  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.457560  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:25.457568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:25.457652  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:25.497132  433881 cri.go:89] found id: ""
	I0408 12:49:25.497172  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.497185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:25.497194  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:25.497265  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:25.542721  433881 cri.go:89] found id: ""
	I0408 12:49:25.542754  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.542765  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:25.542773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:25.542842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:25.583815  433881 cri.go:89] found id: ""
	I0408 12:49:25.583858  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.583869  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:25.583876  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:25.583931  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:25.623484  433881 cri.go:89] found id: ""
	I0408 12:49:25.623519  433881 logs.go:276] 0 containers: []
	W0408 12:49:25.623530  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:25.623544  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:25.623562  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:25.674250  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:25.674286  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:25.735433  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:25.735477  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:25.750760  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:25.750792  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:25.830122  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:25.830158  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:25.830192  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:21.929027  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:23.933879  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.429452  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:27.106798  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:29.605690  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:26.216525  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.216788  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.217600  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:28.418059  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:28.434568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:28.434627  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.479914  433881 cri.go:89] found id: ""
	I0408 12:49:28.479956  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.479968  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:28.479977  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:28.480052  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:28.526249  433881 cri.go:89] found id: ""
	I0408 12:49:28.526282  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.526305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:28.526314  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:28.526403  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:28.564561  433881 cri.go:89] found id: ""
	I0408 12:49:28.564595  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.564606  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:28.564613  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:28.564666  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:28.606416  433881 cri.go:89] found id: ""
	I0408 12:49:28.606456  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.606469  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:28.606478  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:28.606545  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:28.649847  433881 cri.go:89] found id: ""
	I0408 12:49:28.649880  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.649915  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:28.649925  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:28.650014  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:28.690548  433881 cri.go:89] found id: ""
	I0408 12:49:28.690587  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.690600  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:28.690609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:28.690681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:28.730123  433881 cri.go:89] found id: ""
	I0408 12:49:28.730159  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.730170  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:28.730179  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:28.730250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:28.771147  433881 cri.go:89] found id: ""
	I0408 12:49:28.771192  433881 logs.go:276] 0 containers: []
	W0408 12:49:28.771205  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:28.771220  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:28.771238  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:28.856250  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:28.856273  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:28.856301  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:28.941925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:28.941982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:29.003853  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:29.003893  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:29.057957  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:29.058004  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.573734  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:31.588485  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:31.588551  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:28.433974  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:30.930607  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.606729  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.107220  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:32.218719  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:34.718165  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:31.625072  433881 cri.go:89] found id: ""
	I0408 12:49:31.625100  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.625108  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:31.625114  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:31.625175  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:31.662716  433881 cri.go:89] found id: ""
	I0408 12:49:31.662752  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.662763  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:31.662772  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:31.662839  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:31.701551  433881 cri.go:89] found id: ""
	I0408 12:49:31.701588  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.701596  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:31.701603  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:31.701687  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:31.741857  433881 cri.go:89] found id: ""
	I0408 12:49:31.741888  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.741900  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:31.741908  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:31.741973  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:31.782209  433881 cri.go:89] found id: ""
	I0408 12:49:31.782240  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.782252  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:31.782259  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:31.782347  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:31.820207  433881 cri.go:89] found id: ""
	I0408 12:49:31.820261  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.820283  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:31.820297  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:31.820362  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:31.858445  433881 cri.go:89] found id: ""
	I0408 12:49:31.858482  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.858495  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:31.858504  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:31.858580  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:31.899017  433881 cri.go:89] found id: ""
	I0408 12:49:31.899052  433881 logs.go:276] 0 containers: []
	W0408 12:49:31.899070  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:31.899084  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:31.899102  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:31.956200  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:31.956239  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:31.971940  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:31.971982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:32.049548  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:32.049578  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:32.049596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:32.136320  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:32.136366  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:34.684997  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:34.700097  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:34.700185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:34.757577  433881 cri.go:89] found id: ""
	I0408 12:49:34.757669  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.757686  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:34.757696  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:34.757792  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:34.798151  433881 cri.go:89] found id: ""
	I0408 12:49:34.798188  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.798196  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:34.798203  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:34.798266  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:34.835735  433881 cri.go:89] found id: ""
	I0408 12:49:34.835774  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.835786  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:34.835794  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:34.835862  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:34.875311  433881 cri.go:89] found id: ""
	I0408 12:49:34.875345  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.875359  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:34.875368  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:34.875484  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:34.916118  433881 cri.go:89] found id: ""
	I0408 12:49:34.916148  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.916159  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:34.916167  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:34.916233  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:34.961197  433881 cri.go:89] found id: ""
	I0408 12:49:34.961234  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.961246  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:34.961254  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:34.961314  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:34.999553  433881 cri.go:89] found id: ""
	I0408 12:49:34.999590  433881 logs.go:276] 0 containers: []
	W0408 12:49:34.999598  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:34.999606  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:34.999671  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:35.038204  433881 cri.go:89] found id: ""
	I0408 12:49:35.038244  433881 logs.go:276] 0 containers: []
	W0408 12:49:35.038254  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:35.038265  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:35.038277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:35.118925  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:35.118982  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:35.164584  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:35.164631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:35.216654  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:35.216694  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:35.232506  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:35.232544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:35.304615  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:33.429854  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:35.933211  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:36.605433  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:38.606014  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.217818  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:39.717250  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:37.805529  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:37.821463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:37.821550  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:37.860644  433881 cri.go:89] found id: ""
	I0408 12:49:37.860683  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.860700  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:37.860709  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:37.860781  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:37.899995  433881 cri.go:89] found id: ""
	I0408 12:49:37.900034  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.900042  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:37.900048  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:37.900111  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:37.939562  433881 cri.go:89] found id: ""
	I0408 12:49:37.939584  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.939592  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:37.939599  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:37.939668  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:37.977990  433881 cri.go:89] found id: ""
	I0408 12:49:37.978021  433881 logs.go:276] 0 containers: []
	W0408 12:49:37.978033  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:37.978042  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:37.978113  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:38.014506  433881 cri.go:89] found id: ""
	I0408 12:49:38.014537  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.014551  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:38.014559  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:38.014639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:38.049714  433881 cri.go:89] found id: ""
	I0408 12:49:38.049751  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.049764  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:38.049773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:38.049842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:38.089931  433881 cri.go:89] found id: ""
	I0408 12:49:38.089978  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.089987  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:38.089993  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:38.090058  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:38.127674  433881 cri.go:89] found id: ""
	I0408 12:49:38.127715  433881 logs.go:276] 0 containers: []
	W0408 12:49:38.127727  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:38.127738  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:38.127759  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.144170  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:38.144203  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:38.225864  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:38.225885  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:38.225899  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:38.309289  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:38.309334  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:38.351666  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:38.351724  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:40.910064  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:40.926264  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:40.926350  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:40.973110  433881 cri.go:89] found id: ""
	I0408 12:49:40.973138  433881 logs.go:276] 0 containers: []
	W0408 12:49:40.973146  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:40.973152  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:40.973209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:41.014643  433881 cri.go:89] found id: ""
	I0408 12:49:41.014675  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.014688  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:41.014696  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:41.014761  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:41.054414  433881 cri.go:89] found id: ""
	I0408 12:49:41.054446  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.054461  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:41.054469  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:41.054543  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:41.094835  433881 cri.go:89] found id: ""
	I0408 12:49:41.094867  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.094876  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:41.094883  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:41.094943  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:41.153654  433881 cri.go:89] found id: ""
	I0408 12:49:41.153684  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.153693  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:41.153699  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:41.153751  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:41.196170  433881 cri.go:89] found id: ""
	I0408 12:49:41.196198  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.196209  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:41.196215  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:41.196277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:41.261374  433881 cri.go:89] found id: ""
	I0408 12:49:41.261412  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.261423  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:41.261432  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:41.261500  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:41.300491  433881 cri.go:89] found id: ""
	I0408 12:49:41.300523  433881 logs.go:276] 0 containers: []
	W0408 12:49:41.300532  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:41.300546  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:41.300559  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:41.373813  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:41.373843  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:41.373860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:41.449773  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:41.449819  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:41.498826  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:41.498862  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:41.552736  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:41.552780  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:38.431584  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:40.930328  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.106567  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:43.606770  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:41.718244  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.218855  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:44.068653  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:44.083655  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:44.083756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:44.124068  433881 cri.go:89] found id: ""
	I0408 12:49:44.124101  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.124113  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:44.124122  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:44.124193  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:44.160898  433881 cri.go:89] found id: ""
	I0408 12:49:44.160936  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.160950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:44.160958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:44.161032  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:44.196503  433881 cri.go:89] found id: ""
	I0408 12:49:44.196532  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.196540  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:44.196547  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:44.196611  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:44.234604  433881 cri.go:89] found id: ""
	I0408 12:49:44.234644  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.234656  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:44.234664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:44.234720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:44.271243  433881 cri.go:89] found id: ""
	I0408 12:49:44.271283  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.271297  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:44.271306  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:44.271369  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:44.308504  433881 cri.go:89] found id: ""
	I0408 12:49:44.308543  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.308571  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:44.308581  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:44.308644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:44.345662  433881 cri.go:89] found id: ""
	I0408 12:49:44.345703  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.345716  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:44.345725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:44.345786  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:44.384785  433881 cri.go:89] found id: ""
	I0408 12:49:44.384816  433881 logs.go:276] 0 containers: []
	W0408 12:49:44.384826  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:44.384845  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:44.384863  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:44.429253  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:44.429283  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:44.485160  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:44.485201  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:44.502996  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:44.503033  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:44.581921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:44.581946  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:44.581964  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:43.428915  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:45.430859  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.106078  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.108320  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:46.718065  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:48.721772  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:47.167101  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:47.183406  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:47.183475  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:47.244266  433881 cri.go:89] found id: ""
	I0408 12:49:47.244295  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.244306  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:47.244314  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:47.244379  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:47.285538  433881 cri.go:89] found id: ""
	I0408 12:49:47.285575  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.285588  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:47.285597  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:47.285673  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:47.323634  433881 cri.go:89] found id: ""
	I0408 12:49:47.323670  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.323679  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:47.323707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:47.323791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:47.362737  433881 cri.go:89] found id: ""
	I0408 12:49:47.362774  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.362787  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:47.362795  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:47.362856  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:47.403914  433881 cri.go:89] found id: ""
	I0408 12:49:47.403947  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.403958  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:47.403967  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:47.404035  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:47.445470  433881 cri.go:89] found id: ""
	I0408 12:49:47.445506  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.445521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:47.445530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:47.445598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:47.482633  433881 cri.go:89] found id: ""
	I0408 12:49:47.482669  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.482680  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:47.482689  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:47.482760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:47.521404  433881 cri.go:89] found id: ""
	I0408 12:49:47.521441  433881 logs.go:276] 0 containers: []
	W0408 12:49:47.521456  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:47.521469  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:47.521486  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:47.597247  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:47.597270  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:47.597284  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:47.678765  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:47.678805  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.721463  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:47.721502  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:47.780430  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:47.780472  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.295320  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:50.312212  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:50.312293  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:50.355987  433881 cri.go:89] found id: ""
	I0408 12:49:50.356022  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.356034  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:50.356043  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:50.356118  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:50.399662  433881 cri.go:89] found id: ""
	I0408 12:49:50.399714  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.399726  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:50.399735  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:50.399798  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:50.441718  433881 cri.go:89] found id: ""
	I0408 12:49:50.441753  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.441764  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:50.441773  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:50.441846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:50.485588  433881 cri.go:89] found id: ""
	I0408 12:49:50.485624  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.485634  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:50.485641  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:50.485703  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:50.524897  433881 cri.go:89] found id: ""
	I0408 12:49:50.524929  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.524937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:50.524943  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:50.524998  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:50.561337  433881 cri.go:89] found id: ""
	I0408 12:49:50.561378  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.561388  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:50.561396  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:50.561468  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:50.603052  433881 cri.go:89] found id: ""
	I0408 12:49:50.603082  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.603092  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:50.603101  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:50.603169  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:50.643514  433881 cri.go:89] found id: ""
	I0408 12:49:50.643555  433881 logs.go:276] 0 containers: []
	W0408 12:49:50.643566  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:50.643576  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:50.643589  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:50.697346  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:50.697388  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:50.711982  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:50.712015  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:50.796665  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:50.796711  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:50.796731  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:50.873396  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:50.873438  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:47.432167  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:49.929922  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:50.606575  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.106564  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:51.217123  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.217785  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.217941  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:53.421458  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:53.435909  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:53.435975  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:53.478018  433881 cri.go:89] found id: ""
	I0408 12:49:53.478052  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.478063  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:53.478072  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:53.478138  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:53.518890  433881 cri.go:89] found id: ""
	I0408 12:49:53.518936  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.518950  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:53.518958  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:53.519047  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:53.554912  433881 cri.go:89] found id: ""
	I0408 12:49:53.554952  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.554964  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:53.554972  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:53.555042  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:53.592991  433881 cri.go:89] found id: ""
	I0408 12:49:53.593019  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.593028  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:53.593033  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:53.593088  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:53.631215  433881 cri.go:89] found id: ""
	I0408 12:49:53.631255  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.631269  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:53.631277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:53.631351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:53.669189  433881 cri.go:89] found id: ""
	I0408 12:49:53.669228  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.669248  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:53.669258  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:53.669322  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:53.709315  433881 cri.go:89] found id: ""
	I0408 12:49:53.709344  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.709353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:53.709359  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:53.709421  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:53.750869  433881 cri.go:89] found id: ""
	I0408 12:49:53.750910  433881 logs.go:276] 0 containers: []
	W0408 12:49:53.750922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:53.750934  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:53.750951  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:53.802734  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:53.802782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:53.819509  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:53.819546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:53.888733  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:53.888761  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:53.888782  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:53.972408  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:53.972448  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:56.517173  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:56.532357  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:56.532427  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:56.574068  433881 cri.go:89] found id: ""
	I0408 12:49:56.574109  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.574118  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:56.574129  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:56.574276  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:52.429230  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:54.929643  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:55.607214  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:58.109657  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:57.717805  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.219041  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:56.616853  433881 cri.go:89] found id: ""
	I0408 12:49:56.616885  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.616906  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:56.616915  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:56.616988  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:56.659097  433881 cri.go:89] found id: ""
	I0408 12:49:56.659125  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.659133  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:56.659139  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:56.659190  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:56.699222  433881 cri.go:89] found id: ""
	I0408 12:49:56.699262  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.699274  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:56.699283  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:56.699345  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:56.747017  433881 cri.go:89] found id: ""
	I0408 12:49:56.747055  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.747068  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:56.747076  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:56.747149  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:56.784988  433881 cri.go:89] found id: ""
	I0408 12:49:56.785028  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.785042  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:56.785058  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:56.785126  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:56.830280  433881 cri.go:89] found id: ""
	I0408 12:49:56.830320  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.830332  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:56.830340  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:56.830410  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:56.868643  433881 cri.go:89] found id: ""
	I0408 12:49:56.868678  433881 logs.go:276] 0 containers: []
	W0408 12:49:56.868686  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:56.868697  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:49:56.868713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:49:56.922497  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:49:56.922542  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:49:56.940550  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:49:56.940596  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:49:57.018640  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:49:57.018665  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:49:57.018680  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.096626  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:57.096681  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:49:59.638585  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:49:59.652384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:49:59.652466  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:49:59.692778  433881 cri.go:89] found id: ""
	I0408 12:49:59.692823  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.692837  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:49:59.692846  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:49:59.692906  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:49:59.732896  433881 cri.go:89] found id: ""
	I0408 12:49:59.732923  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.732933  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:49:59.732940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:49:59.732999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:49:59.774774  433881 cri.go:89] found id: ""
	I0408 12:49:59.774806  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.774814  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:49:59.774819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:49:59.774870  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:49:59.812919  433881 cri.go:89] found id: ""
	I0408 12:49:59.812959  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.812972  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:49:59.812980  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:49:59.813043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:49:59.848653  433881 cri.go:89] found id: ""
	I0408 12:49:59.848684  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.848695  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:49:59.848703  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:49:59.848772  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:49:59.883495  433881 cri.go:89] found id: ""
	I0408 12:49:59.883525  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.883537  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:49:59.883546  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:49:59.883625  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:49:59.925080  433881 cri.go:89] found id: ""
	I0408 12:49:59.925113  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.925122  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:49:59.925129  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:49:59.925182  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:49:59.967101  433881 cri.go:89] found id: ""
	I0408 12:49:59.967130  433881 logs.go:276] 0 containers: []
	W0408 12:49:59.967142  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:49:59.967152  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:49:59.967163  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:00.010507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:00.010546  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:00.063139  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:00.063182  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:00.079229  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:00.079266  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:00.155202  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:00.155235  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:00.155253  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:49:57.430097  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:49:59.430226  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:00.605915  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:03.106990  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.717304  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.717757  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:02.738934  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:02.752509  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:02.752593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:02.791178  433881 cri.go:89] found id: ""
	I0408 12:50:02.791212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.791222  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:02.791229  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:02.791301  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:02.834180  433881 cri.go:89] found id: ""
	I0408 12:50:02.834212  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.834225  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:02.834234  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:02.834296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:02.873513  433881 cri.go:89] found id: ""
	I0408 12:50:02.873551  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.873563  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:02.873573  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:02.873651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:02.921329  433881 cri.go:89] found id: ""
	I0408 12:50:02.921371  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.921384  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:02.921392  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:02.921517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:02.959940  433881 cri.go:89] found id: ""
	I0408 12:50:02.959970  433881 logs.go:276] 0 containers: []
	W0408 12:50:02.959980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:02.959988  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:02.960120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:03.001222  433881 cri.go:89] found id: ""
	I0408 12:50:03.001251  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.001259  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:03.001265  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:03.001317  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:03.043627  433881 cri.go:89] found id: ""
	I0408 12:50:03.043656  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.043666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:03.043671  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:03.043750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:03.083603  433881 cri.go:89] found id: ""
	I0408 12:50:03.083640  433881 logs.go:276] 0 containers: []
	W0408 12:50:03.083649  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:03.083660  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:03.083674  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:03.138300  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:03.138343  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:03.153439  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:03.153476  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:03.230230  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:03.230258  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:03.230277  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:03.312005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:03.312048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:05.851000  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:05.865533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:05.865601  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:05.905449  433881 cri.go:89] found id: ""
	I0408 12:50:05.905485  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.905495  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:05.905501  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:05.905570  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:05.952260  433881 cri.go:89] found id: ""
	I0408 12:50:05.952293  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.952305  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:05.952313  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:05.952384  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:05.993398  433881 cri.go:89] found id: ""
	I0408 12:50:05.993430  433881 logs.go:276] 0 containers: []
	W0408 12:50:05.993440  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:05.993446  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:05.993512  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:06.031484  433881 cri.go:89] found id: ""
	I0408 12:50:06.031527  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.031539  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:06.031551  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:06.031613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:06.067855  433881 cri.go:89] found id: ""
	I0408 12:50:06.067897  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.067910  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:06.067920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:06.067992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:06.108905  433881 cri.go:89] found id: ""
	I0408 12:50:06.108937  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.108949  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:06.108958  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:06.109010  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:06.147629  433881 cri.go:89] found id: ""
	I0408 12:50:06.147664  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.147674  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:06.147683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:06.147760  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:06.184250  433881 cri.go:89] found id: ""
	I0408 12:50:06.184287  433881 logs.go:276] 0 containers: []
	W0408 12:50:06.184298  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:06.184312  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:06.184329  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:06.239560  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:06.239606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:06.254746  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:06.254777  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:06.330423  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:06.330453  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:06.330471  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:06.410965  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:06.411017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:01.930407  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:04.429884  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:06.430557  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:05.605804  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.606737  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:10.107370  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:07.218275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:09.716548  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:08.958108  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:08.972557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:08.972626  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:09.026034  433881 cri.go:89] found id: ""
	I0408 12:50:09.026073  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.026081  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:09.026094  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:09.026145  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:09.063360  433881 cri.go:89] found id: ""
	I0408 12:50:09.063399  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.063411  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:09.063420  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:09.063509  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:09.101002  433881 cri.go:89] found id: ""
	I0408 12:50:09.101030  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.101039  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:09.101045  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:09.101104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:09.140794  433881 cri.go:89] found id: ""
	I0408 12:50:09.140830  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.140843  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:09.140852  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:09.140912  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:09.176889  433881 cri.go:89] found id: ""
	I0408 12:50:09.176927  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.176939  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:09.176947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:09.177013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:09.218687  433881 cri.go:89] found id: ""
	I0408 12:50:09.218719  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.218730  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:09.218739  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:09.218819  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:09.254509  433881 cri.go:89] found id: ""
	I0408 12:50:09.254542  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.254551  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:09.254557  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:09.254619  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:09.291313  433881 cri.go:89] found id: ""
	I0408 12:50:09.291341  433881 logs.go:276] 0 containers: []
	W0408 12:50:09.291349  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:09.291359  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:09.291382  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:09.342578  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:09.342625  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:09.359207  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:09.359236  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:09.434921  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:09.434945  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:09.434962  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:09.526672  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:09.526726  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:08.930029  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.429317  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.107556  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:14.606578  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:11.717001  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:13.717782  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.719875  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:12.075428  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:12.089920  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:12.089986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:12.128791  433881 cri.go:89] found id: ""
	I0408 12:50:12.128878  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.128895  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:12.128905  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:12.128979  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:12.166911  433881 cri.go:89] found id: ""
	I0408 12:50:12.166939  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.166947  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:12.166954  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:12.167005  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:12.205798  433881 cri.go:89] found id: ""
	I0408 12:50:12.205830  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.205839  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:12.205847  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:12.205905  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:12.242716  433881 cri.go:89] found id: ""
	I0408 12:50:12.242754  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.242764  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:12.242771  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:12.242825  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:12.279061  433881 cri.go:89] found id: ""
	I0408 12:50:12.279098  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.279109  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:12.279118  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:12.279187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:12.319510  433881 cri.go:89] found id: ""
	I0408 12:50:12.319538  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.319547  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:12.319554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:12.319610  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:12.357578  433881 cri.go:89] found id: ""
	I0408 12:50:12.357613  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.357625  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:12.357634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:12.357699  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:12.402895  433881 cri.go:89] found id: ""
	I0408 12:50:12.402931  433881 logs.go:276] 0 containers: []
	W0408 12:50:12.402944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:12.402958  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:12.402975  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:12.455885  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:12.455929  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:12.472119  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:12.472160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:12.551019  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:12.551041  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:12.551054  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:12.633560  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:12.633606  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.176459  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:15.191013  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:15.191083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:15.243825  433881 cri.go:89] found id: ""
	I0408 12:50:15.243852  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.243861  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:15.243867  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:15.243918  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:15.282768  433881 cri.go:89] found id: ""
	I0408 12:50:15.282803  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.282816  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:15.282824  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:15.282893  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:15.318418  433881 cri.go:89] found id: ""
	I0408 12:50:15.318447  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.318455  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:15.318463  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:15.318540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:15.354071  433881 cri.go:89] found id: ""
	I0408 12:50:15.354109  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.354125  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:15.354133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:15.354205  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:15.397142  433881 cri.go:89] found id: ""
	I0408 12:50:15.397176  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.397185  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:15.397191  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:15.397253  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:15.436798  433881 cri.go:89] found id: ""
	I0408 12:50:15.436832  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.436843  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:15.436851  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:15.436916  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:15.475792  433881 cri.go:89] found id: ""
	I0408 12:50:15.475823  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.475836  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:15.475844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:15.475917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:15.526277  433881 cri.go:89] found id: ""
	I0408 12:50:15.526323  433881 logs.go:276] 0 containers: []
	W0408 12:50:15.526335  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:15.526348  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:15.526365  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:15.601590  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:15.601616  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:15.601631  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:15.681784  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:15.681842  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:15.725300  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:15.725345  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:15.778579  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:15.778627  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:13.429712  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:15.430255  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:17.106153  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:19.607656  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.217812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.719543  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:18.296690  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:18.310554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:18.310623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:18.350635  433881 cri.go:89] found id: ""
	I0408 12:50:18.350673  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.350685  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:18.350693  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:18.350756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:18.391943  433881 cri.go:89] found id: ""
	I0408 12:50:18.391974  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.391984  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:18.391990  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:18.392059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:18.433191  433881 cri.go:89] found id: ""
	I0408 12:50:18.433226  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.433237  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:18.433246  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:18.433310  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:18.471600  433881 cri.go:89] found id: ""
	I0408 12:50:18.471629  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.471641  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:18.471649  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:18.471737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:18.507180  433881 cri.go:89] found id: ""
	I0408 12:50:18.507219  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.507228  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:18.507242  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:18.507307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:18.553894  433881 cri.go:89] found id: ""
	I0408 12:50:18.553924  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.553939  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:18.553947  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:18.554013  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:18.593823  433881 cri.go:89] found id: ""
	I0408 12:50:18.593860  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.593870  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:18.593878  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:18.593934  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:18.636636  433881 cri.go:89] found id: ""
	I0408 12:50:18.636667  433881 logs.go:276] 0 containers: []
	W0408 12:50:18.636679  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:18.636692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:18.636709  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:18.690597  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:18.690640  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:18.706484  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:18.706537  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:18.795390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:18.795419  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:18.795434  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:18.873458  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:18.873518  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:21.420942  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:21.436200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:21.436262  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:21.473194  433881 cri.go:89] found id: ""
	I0408 12:50:21.473228  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.473237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:21.473244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:21.473297  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:21.510496  433881 cri.go:89] found id: ""
	I0408 12:50:21.510534  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.510547  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:21.510556  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:21.510618  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:21.550290  433881 cri.go:89] found id: ""
	I0408 12:50:21.550329  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.550337  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:21.550344  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:21.550399  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:21.586192  433881 cri.go:89] found id: ""
	I0408 12:50:21.586229  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.586241  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:21.586252  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:21.586316  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:17.930126  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:20.430210  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:22.107118  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.107812  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:23.217266  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:25.218476  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:21.645888  433881 cri.go:89] found id: ""
	I0408 12:50:21.645925  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.645937  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:21.645945  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:21.646012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:21.710384  433881 cri.go:89] found id: ""
	I0408 12:50:21.710416  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.710429  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:21.710437  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:21.710503  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:21.773423  433881 cri.go:89] found id: ""
	I0408 12:50:21.773458  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.773467  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:21.773473  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:21.773536  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:21.814353  433881 cri.go:89] found id: ""
	I0408 12:50:21.814389  433881 logs.go:276] 0 containers: []
	W0408 12:50:21.814401  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:21.814415  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:21.814437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:21.866744  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:21.866783  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:21.883577  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:21.883617  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:21.963339  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:21.963362  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:21.963379  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:22.044959  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:22.045017  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:24.589027  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:24.603707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:24.603797  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:24.648525  433881 cri.go:89] found id: ""
	I0408 12:50:24.648566  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.648579  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:24.648587  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:24.648656  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:24.693788  433881 cri.go:89] found id: ""
	I0408 12:50:24.693827  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.693840  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:24.693850  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:24.693925  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:24.734461  433881 cri.go:89] found id: ""
	I0408 12:50:24.734499  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.734507  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:24.734514  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:24.734578  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:24.781723  433881 cri.go:89] found id: ""
	I0408 12:50:24.781759  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.781772  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:24.781780  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:24.781850  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:24.823060  433881 cri.go:89] found id: ""
	I0408 12:50:24.823091  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.823101  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:24.823109  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:24.823195  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:24.858847  433881 cri.go:89] found id: ""
	I0408 12:50:24.858887  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.858899  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:24.858913  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:24.858968  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:24.899075  433881 cri.go:89] found id: ""
	I0408 12:50:24.899113  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.899125  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:24.899133  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:24.899216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:24.941839  433881 cri.go:89] found id: ""
	I0408 12:50:24.941876  433881 logs.go:276] 0 containers: []
	W0408 12:50:24.941886  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:24.941897  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:24.941911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:24.993358  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:24.993402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:25.010857  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:25.010892  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:25.098985  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:25.099017  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:25.099035  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:25.179115  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:25.179172  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:22.928804  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:24.930608  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:26.607216  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:28.608092  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.717812  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:30.218079  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:27.726080  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:27.740646  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:27.740739  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:27.781567  433881 cri.go:89] found id: ""
	I0408 12:50:27.781612  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.781623  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:27.781630  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:27.781696  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:27.823034  433881 cri.go:89] found id: ""
	I0408 12:50:27.823077  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.823090  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:27.823099  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:27.823174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:27.862738  433881 cri.go:89] found id: ""
	I0408 12:50:27.862797  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.862822  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:27.862832  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:27.862917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:27.905821  433881 cri.go:89] found id: ""
	I0408 12:50:27.905862  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.905874  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:27.905884  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:27.905954  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:27.949580  433881 cri.go:89] found id: ""
	I0408 12:50:27.949613  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.949625  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:27.949634  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:27.949721  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:27.989453  433881 cri.go:89] found id: ""
	I0408 12:50:27.989488  433881 logs.go:276] 0 containers: []
	W0408 12:50:27.989496  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:27.989502  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:27.989560  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:28.031983  433881 cri.go:89] found id: ""
	I0408 12:50:28.032015  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.032027  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:28.032035  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:28.032114  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:28.072851  433881 cri.go:89] found id: ""
	I0408 12:50:28.072884  433881 logs.go:276] 0 containers: []
	W0408 12:50:28.072895  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:28.072910  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:28.072927  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:28.116117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:28.116160  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:28.170098  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:28.170142  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:28.184820  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:28.184860  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:28.261324  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:28.261355  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:28.261384  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:30.837906  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:30.853871  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:30.853969  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:30.896197  433881 cri.go:89] found id: ""
	I0408 12:50:30.896228  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.896237  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:30.896244  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:30.896296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:30.938689  433881 cri.go:89] found id: ""
	I0408 12:50:30.938726  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.938740  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:30.938758  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:30.938840  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:30.980883  433881 cri.go:89] found id: ""
	I0408 12:50:30.980918  433881 logs.go:276] 0 containers: []
	W0408 12:50:30.980929  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:30.980937  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:30.981008  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:31.018262  433881 cri.go:89] found id: ""
	I0408 12:50:31.018291  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.018305  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:31.018314  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:31.018382  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:31.055397  433881 cri.go:89] found id: ""
	I0408 12:50:31.055430  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.055443  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:31.055452  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:31.055527  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:31.091476  433881 cri.go:89] found id: ""
	I0408 12:50:31.091511  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.091523  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:31.091531  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:31.091583  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:31.130285  433881 cri.go:89] found id: ""
	I0408 12:50:31.130326  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.130337  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:31.130345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:31.130419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:31.168196  433881 cri.go:89] found id: ""
	I0408 12:50:31.168227  433881 logs.go:276] 0 containers: []
	W0408 12:50:31.168236  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:31.168246  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:31.168258  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:31.220612  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:31.220652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:31.236718  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:31.236754  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:31.310550  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:31.310574  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:31.310588  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:31.387376  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:31.387420  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:27.429985  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:29.928718  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:31.106901  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.606293  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:32.717659  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.217468  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.932307  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:33.946664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:33.946754  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:33.991321  433881 cri.go:89] found id: ""
	I0408 12:50:33.991359  433881 logs.go:276] 0 containers: []
	W0408 12:50:33.991371  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:33.991381  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:33.991451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:34.033989  433881 cri.go:89] found id: ""
	I0408 12:50:34.034024  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.034034  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:34.034041  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:34.034125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:34.081140  433881 cri.go:89] found id: ""
	I0408 12:50:34.081183  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.081192  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:34.081199  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:34.081258  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:34.122332  433881 cri.go:89] found id: ""
	I0408 12:50:34.122365  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.122376  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:34.122384  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:34.122451  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:34.161307  433881 cri.go:89] found id: ""
	I0408 12:50:34.161353  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.161378  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:34.161387  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:34.161460  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:34.199664  433881 cri.go:89] found id: ""
	I0408 12:50:34.199715  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.199727  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:34.199736  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:34.199816  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:34.242044  433881 cri.go:89] found id: ""
	I0408 12:50:34.242077  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.242087  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:34.242094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:34.242159  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:34.277852  433881 cri.go:89] found id: ""
	I0408 12:50:34.277893  433881 logs.go:276] 0 containers: []
	W0408 12:50:34.277908  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:34.277920  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:34.277940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:34.329572  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:34.329614  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:34.343823  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:34.343854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:34.422625  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:34.422652  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:34.422670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:34.504605  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:34.504653  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:31.928982  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:33.929758  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:35.930610  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:36.110235  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:38.606389  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.217645  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:39.218104  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:37.050790  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:37.065111  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:37.065179  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:37.108541  433881 cri.go:89] found id: ""
	I0408 12:50:37.108573  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.108583  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:37.108590  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:37.108655  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:37.145207  433881 cri.go:89] found id: ""
	I0408 12:50:37.145241  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.145256  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:37.145264  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:37.145332  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:37.182788  433881 cri.go:89] found id: ""
	I0408 12:50:37.182823  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.182836  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:37.182844  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:37.182917  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:37.222780  433881 cri.go:89] found id: ""
	I0408 12:50:37.222804  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.222813  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:37.222819  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:37.222884  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:37.261653  433881 cri.go:89] found id: ""
	I0408 12:50:37.261703  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.261715  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:37.261725  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:37.261795  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:37.300613  433881 cri.go:89] found id: ""
	I0408 12:50:37.300642  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.300651  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:37.300657  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:37.300720  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:37.344252  433881 cri.go:89] found id: ""
	I0408 12:50:37.344289  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.344302  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:37.344311  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:37.344380  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:37.382644  433881 cri.go:89] found id: ""
	I0408 12:50:37.382682  433881 logs.go:276] 0 containers: []
	W0408 12:50:37.382695  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:37.382708  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:37.382725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:37.437205  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:37.437248  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:37.451772  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:37.451806  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:37.535578  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:37.535604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:37.535618  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:37.618315  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:37.618358  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.160025  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:40.173704  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:40.173770  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:40.212527  433881 cri.go:89] found id: ""
	I0408 12:50:40.212564  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.212576  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:40.212584  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:40.212648  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:40.250802  433881 cri.go:89] found id: ""
	I0408 12:50:40.250833  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.250841  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:40.250848  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:40.250910  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:40.292534  433881 cri.go:89] found id: ""
	I0408 12:50:40.292565  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.292576  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:40.292584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:40.292641  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:40.329973  433881 cri.go:89] found id: ""
	I0408 12:50:40.330004  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.330017  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:40.330027  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:40.330083  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:40.367351  433881 cri.go:89] found id: ""
	I0408 12:50:40.367381  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.367390  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:40.367397  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:40.367462  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:40.404499  433881 cri.go:89] found id: ""
	I0408 12:50:40.404535  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.404546  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:40.404556  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:40.404624  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:40.448208  433881 cri.go:89] found id: ""
	I0408 12:50:40.448244  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.448254  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:40.448263  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:40.448318  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:40.490191  433881 cri.go:89] found id: ""
	I0408 12:50:40.490225  433881 logs.go:276] 0 containers: []
	W0408 12:50:40.490235  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:40.490246  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:40.490262  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:40.507079  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:40.507119  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:40.584844  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:40.584880  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:40.584905  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:40.665416  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:40.665461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:40.710289  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:40.710331  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:38.429765  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.430575  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:40.607976  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.106175  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:45.107548  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:41.716953  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.717149  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:43.267848  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:43.283094  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:43.283192  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:43.321609  433881 cri.go:89] found id: ""
	I0408 12:50:43.321643  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.321655  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:43.321664  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:43.321732  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:43.361550  433881 cri.go:89] found id: ""
	I0408 12:50:43.361587  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.361599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:43.361608  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:43.361686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:43.398332  433881 cri.go:89] found id: ""
	I0408 12:50:43.398373  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.398386  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:43.398394  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:43.398463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:43.436808  433881 cri.go:89] found id: ""
	I0408 12:50:43.436836  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.436844  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:43.436850  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:43.436901  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:43.475475  433881 cri.go:89] found id: ""
	I0408 12:50:43.475512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.475524  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:43.475533  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:43.475600  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:43.515481  433881 cri.go:89] found id: ""
	I0408 12:50:43.515512  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.515521  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:43.515530  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:43.515599  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:43.555358  433881 cri.go:89] found id: ""
	I0408 12:50:43.555388  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.555410  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:43.555420  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:43.555476  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:43.590192  433881 cri.go:89] found id: ""
	I0408 12:50:43.590240  433881 logs.go:276] 0 containers: []
	W0408 12:50:43.590253  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:43.590265  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:43.590281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:43.643642  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:43.643699  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:43.659375  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:43.659405  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:43.739721  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:43.739743  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:43.739760  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:43.821107  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:43.821152  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:46.364937  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:46.378208  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:46.378295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:46.415217  433881 cri.go:89] found id: ""
	I0408 12:50:46.415251  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.415263  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:46.415272  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:46.415336  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:46.453886  433881 cri.go:89] found id: ""
	I0408 12:50:46.453921  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.453930  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:46.453936  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:46.453992  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:46.491443  433881 cri.go:89] found id: ""
	I0408 12:50:46.491475  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.491488  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:46.491496  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:46.491565  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:46.535815  433881 cri.go:89] found id: ""
	I0408 12:50:46.535845  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.535854  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:46.535860  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:46.535921  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:46.577704  433881 cri.go:89] found id: ""
	I0408 12:50:46.577814  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.577826  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:46.577835  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:46.577915  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:42.928908  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:44.929425  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:47.606676  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.608190  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.217528  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:48.717623  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:50.729538  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:46.624693  433881 cri.go:89] found id: ""
	I0408 12:50:46.624723  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.624731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:46.624738  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:46.624791  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:46.659410  433881 cri.go:89] found id: ""
	I0408 12:50:46.659462  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.659474  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:46.659482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:46.659547  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:46.694881  433881 cri.go:89] found id: ""
	I0408 12:50:46.694912  433881 logs.go:276] 0 containers: []
	W0408 12:50:46.694926  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:46.694937  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:46.694954  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:46.751416  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:46.751464  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:46.767739  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:46.767779  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:46.854021  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:46.854050  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:46.854066  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.937214  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:46.937252  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.479829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:49.494083  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:49.494156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:49.532518  433881 cri.go:89] found id: ""
	I0408 12:50:49.532555  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.532563  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:49.532569  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:49.532632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:49.571054  433881 cri.go:89] found id: ""
	I0408 12:50:49.571086  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.571111  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:49.571119  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:49.571199  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:49.607025  433881 cri.go:89] found id: ""
	I0408 12:50:49.607061  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.607071  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:49.607080  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:49.607156  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:49.646890  433881 cri.go:89] found id: ""
	I0408 12:50:49.646921  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.646930  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:49.646939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:49.647009  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:49.688671  433881 cri.go:89] found id: ""
	I0408 12:50:49.688707  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.688719  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:49.688728  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:49.688800  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:49.726687  433881 cri.go:89] found id: ""
	I0408 12:50:49.726724  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.726735  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:49.726741  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:49.726808  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:49.767311  433881 cri.go:89] found id: ""
	I0408 12:50:49.767344  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.767353  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:49.767360  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:49.767414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:49.803409  433881 cri.go:89] found id: ""
	I0408 12:50:49.803442  433881 logs.go:276] 0 containers: []
	W0408 12:50:49.803452  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:49.803463  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:49.803478  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:49.842738  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:49.842767  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:49.895264  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:49.895318  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:49.910300  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:49.910332  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:50.005994  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:50.006031  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:50.006048  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:46.929626  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:49.429810  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.106861  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.608143  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:53.217707  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:55.718120  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:52.589266  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:52.603202  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:52.603308  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:52.640493  433881 cri.go:89] found id: ""
	I0408 12:50:52.640525  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.640540  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:52.640550  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:52.640613  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:52.680230  433881 cri.go:89] found id: ""
	I0408 12:50:52.680271  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.680284  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:52.680293  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:52.680355  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:52.724048  433881 cri.go:89] found id: ""
	I0408 12:50:52.724084  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.724096  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:52.724104  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:52.724171  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:52.776926  433881 cri.go:89] found id: ""
	I0408 12:50:52.776960  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.776973  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:52.776982  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:52.777059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:52.814738  433881 cri.go:89] found id: ""
	I0408 12:50:52.814770  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.814781  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:52.814788  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:52.814842  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:52.854463  433881 cri.go:89] found id: ""
	I0408 12:50:52.854501  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.854511  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:52.854521  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:52.854591  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:52.896180  433881 cri.go:89] found id: ""
	I0408 12:50:52.896209  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.896218  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:52.896224  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:52.896279  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:52.931890  433881 cri.go:89] found id: ""
	I0408 12:50:52.931932  433881 logs.go:276] 0 containers: []
	W0408 12:50:52.931944  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:52.931956  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:52.931973  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:53.013345  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:53.013368  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:53.013385  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:53.092792  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:53.092834  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:53.142678  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:53.142713  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:53.196378  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:53.196429  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:55.713265  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:55.729253  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:55.729341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:55.772259  433881 cri.go:89] found id: ""
	I0408 12:50:55.772303  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.772317  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:55.772325  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:55.772398  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:55.816146  433881 cri.go:89] found id: ""
	I0408 12:50:55.816178  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.816188  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:55.816194  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:55.816247  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:55.857896  433881 cri.go:89] found id: ""
	I0408 12:50:55.857935  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.857947  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:55.857955  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:55.858025  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:55.896337  433881 cri.go:89] found id: ""
	I0408 12:50:55.896374  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.896386  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:55.896395  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:55.896463  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:55.936373  433881 cri.go:89] found id: ""
	I0408 12:50:55.936419  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.936430  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:55.936439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:55.936514  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:55.996751  433881 cri.go:89] found id: ""
	I0408 12:50:55.996782  433881 logs.go:276] 0 containers: []
	W0408 12:50:55.996793  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:55.996802  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:55.996866  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:56.038910  433881 cri.go:89] found id: ""
	I0408 12:50:56.038948  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.038956  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:56.038962  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:56.039018  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:56.078147  433881 cri.go:89] found id: ""
	I0408 12:50:56.078185  433881 logs.go:276] 0 containers: []
	W0408 12:50:56.078195  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:56.078206  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:56.078223  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:56.137679  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:56.137725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:56.153067  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:56.153101  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:56.242398  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:56.242422  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:56.242436  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:56.325353  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:56.325402  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
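	The cycle above keeps polling for control-plane containers and, finding none, falls back to gathering kubelet, CRI-O, dmesg and container-status logs. A minimal way to repeat those checks by hand over SSH on the node, reusing the same commands the log shows (assumes crictl is installed, as minikube's fallback `which crictl || echo crictl` implies):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'              # is an apiserver process running at all?
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== $c =="; sudo crictl ps -a --quiet --name="$c"  # empty output = no container found
	    done
	    sudo journalctl -u kubelet -n 400                         # kubelet logs
	    sudo journalctl -u crio -n 400                            # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400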
	I0408 12:50:51.929891  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:54.430216  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:57.106572  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.108219  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.216315  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:00.218162  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:58.867789  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:50:58.881570  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:50:58.881640  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:50:58.918941  433881 cri.go:89] found id: ""
	I0408 12:50:58.918971  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.918980  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:50:58.918987  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:50:58.919041  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:50:58.956339  433881 cri.go:89] found id: ""
	I0408 12:50:58.956375  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.956387  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:50:58.956395  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:50:58.956448  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:50:58.998045  433881 cri.go:89] found id: ""
	I0408 12:50:58.998075  433881 logs.go:276] 0 containers: []
	W0408 12:50:58.998087  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:50:58.998113  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:50:58.998186  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:50:59.037694  433881 cri.go:89] found id: ""
	I0408 12:50:59.037724  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.037736  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:50:59.037744  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:50:59.037813  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:50:59.079404  433881 cri.go:89] found id: ""
	I0408 12:50:59.079436  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.079448  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:50:59.079458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:50:59.079525  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:50:59.117535  433881 cri.go:89] found id: ""
	I0408 12:50:59.117566  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.117585  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:50:59.117593  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:50:59.117661  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:50:59.163144  433881 cri.go:89] found id: ""
	I0408 12:50:59.163177  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.163190  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:50:59.163200  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:50:59.163295  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:50:59.201778  433881 cri.go:89] found id: ""
	I0408 12:50:59.201815  433881 logs.go:276] 0 containers: []
	W0408 12:50:59.201827  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:50:59.201840  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:50:59.201857  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:50:59.256688  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:50:59.256730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:50:59.272631  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:50:59.272670  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:50:59.345194  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:50:59.345219  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:50:59.345233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:50:59.420807  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:50:59.420873  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:50:56.931254  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:50:59.429578  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.606793  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.105581  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:02.218796  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:04.718232  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:01.966779  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:01.992790  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:01.992868  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:02.032532  433881 cri.go:89] found id: ""
	I0408 12:51:02.032580  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.032592  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:02.032603  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:02.032684  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:02.070377  433881 cri.go:89] found id: ""
	I0408 12:51:02.070405  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.070412  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:02.070418  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:02.070481  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:02.109543  433881 cri.go:89] found id: ""
	I0408 12:51:02.109569  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.109577  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:02.109584  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:02.109639  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:02.148009  433881 cri.go:89] found id: ""
	I0408 12:51:02.148049  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.148062  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:02.148078  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:02.148144  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:02.184318  433881 cri.go:89] found id: ""
	I0408 12:51:02.184351  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.184362  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:02.184371  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:02.184469  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:02.225491  433881 cri.go:89] found id: ""
	I0408 12:51:02.225534  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.225545  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:02.225554  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:02.225628  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:02.269401  433881 cri.go:89] found id: ""
	I0408 12:51:02.269439  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.269447  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:02.269454  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:02.269517  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:02.310153  433881 cri.go:89] found id: ""
	I0408 12:51:02.310189  433881 logs.go:276] 0 containers: []
	W0408 12:51:02.310197  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:02.310209  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:02.310224  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:02.326077  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:02.326111  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:02.402369  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:02.402394  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:02.402410  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:02.483819  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:02.483866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:02.527581  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:02.527628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:05.083167  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:05.097986  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:05.098063  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:05.139396  433881 cri.go:89] found id: ""
	I0408 12:51:05.139434  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.139446  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:05.139464  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:05.139568  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:05.176882  433881 cri.go:89] found id: ""
	I0408 12:51:05.176918  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.176931  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:05.176940  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:05.177012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:05.216426  433881 cri.go:89] found id: ""
	I0408 12:51:05.216459  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.216478  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:05.216486  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:05.216598  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:05.254724  433881 cri.go:89] found id: ""
	I0408 12:51:05.254754  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.254762  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:05.254768  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:05.254821  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:05.291361  433881 cri.go:89] found id: ""
	I0408 12:51:05.291388  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.291397  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:05.291403  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:05.291453  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:05.329102  433881 cri.go:89] found id: ""
	I0408 12:51:05.329134  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.329145  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:05.329152  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:05.329216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:05.368614  433881 cri.go:89] found id: ""
	I0408 12:51:05.368657  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.368666  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:05.368674  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:05.368727  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:05.412151  433881 cri.go:89] found id: ""
	I0408 12:51:05.412182  433881 logs.go:276] 0 containers: []
	W0408 12:51:05.412196  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:05.412208  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:05.412227  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:05.428329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:05.428364  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:05.509452  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:05.509481  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:05.509500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:05.586831  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:05.586882  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:05.636175  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:05.636213  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:01.929336  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:03.929754  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.429604  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:06.106159  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.608247  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:07.216779  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:09.217275  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:08.189786  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:08.205609  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:08.205686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:08.256556  433881 cri.go:89] found id: ""
	I0408 12:51:08.256586  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.256597  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:08.256607  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:08.256664  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:08.309126  433881 cri.go:89] found id: ""
	I0408 12:51:08.309163  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.309176  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:08.309184  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:08.309259  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:08.350669  433881 cri.go:89] found id: ""
	I0408 12:51:08.350699  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.350708  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:08.350716  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:08.350766  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:08.392122  433881 cri.go:89] found id: ""
	I0408 12:51:08.392156  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.392164  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:08.392171  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:08.392223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:08.435571  433881 cri.go:89] found id: ""
	I0408 12:51:08.435603  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.435616  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:08.435624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:08.435708  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.474285  433881 cri.go:89] found id: ""
	I0408 12:51:08.474322  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.474334  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:08.474345  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:08.474419  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:08.521060  433881 cri.go:89] found id: ""
	I0408 12:51:08.521101  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.521109  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:08.521116  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:08.521185  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:08.559967  433881 cri.go:89] found id: ""
	I0408 12:51:08.560013  433881 logs.go:276] 0 containers: []
	W0408 12:51:08.560026  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:08.560051  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:08.560068  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:08.614926  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:08.614966  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:08.639012  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:08.639059  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:08.755572  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:08.755604  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:08.755621  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:08.836005  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:08.836050  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:11.383048  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:11.397692  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:11.397763  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:11.439445  433881 cri.go:89] found id: ""
	I0408 12:51:11.439482  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.439494  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:11.439503  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:11.439558  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:11.478262  433881 cri.go:89] found id: ""
	I0408 12:51:11.478297  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.478309  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:11.478318  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:11.478392  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:11.518012  433881 cri.go:89] found id: ""
	I0408 12:51:11.518049  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.518063  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:11.518071  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:11.518137  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:11.557519  433881 cri.go:89] found id: ""
	I0408 12:51:11.557551  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.557563  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:11.557571  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:11.557644  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:11.595494  433881 cri.go:89] found id: ""
	I0408 12:51:11.595528  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.595541  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:11.595550  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:11.595622  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:08.929238  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:10.929862  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.107603  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.611978  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.218410  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:13.718498  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:11.635667  433881 cri.go:89] found id: ""
	I0408 12:51:11.635719  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.635731  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:11.635740  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:11.635806  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:11.675521  433881 cri.go:89] found id: ""
	I0408 12:51:11.675553  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.675562  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:11.675568  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:11.675623  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:11.720983  433881 cri.go:89] found id: ""
	I0408 12:51:11.721016  433881 logs.go:276] 0 containers: []
	W0408 12:51:11.721029  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:11.721041  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:11.721055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:11.775418  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:11.775462  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:11.790019  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:11.790061  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:11.867479  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:11.867512  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:11.867530  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:11.944546  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:11.944594  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:14.487829  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:14.501277  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:14.501356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:14.539996  433881 cri.go:89] found id: ""
	I0408 12:51:14.540031  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.540043  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:14.540054  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:14.540125  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:14.580611  433881 cri.go:89] found id: ""
	I0408 12:51:14.580646  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.580658  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:14.580667  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:14.580729  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:14.623459  433881 cri.go:89] found id: ""
	I0408 12:51:14.623497  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.623509  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:14.623518  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:14.623593  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:14.666904  433881 cri.go:89] found id: ""
	I0408 12:51:14.666944  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.666953  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:14.666959  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:14.667012  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:14.709136  433881 cri.go:89] found id: ""
	I0408 12:51:14.709169  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.709178  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:14.709183  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:14.709234  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:14.757342  433881 cri.go:89] found id: ""
	I0408 12:51:14.757377  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.757390  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:14.757402  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:14.757477  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:14.795210  433881 cri.go:89] found id: ""
	I0408 12:51:14.795249  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.795262  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:14.795271  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:14.795329  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:14.833782  433881 cri.go:89] found id: ""
	I0408 12:51:14.833813  433881 logs.go:276] 0 containers: []
	W0408 12:51:14.833821  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:14.833831  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:14.833843  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:14.892985  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:14.893030  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:14.909567  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:14.909615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:14.988447  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:14.988473  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:14.988494  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:15.068404  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:15.068446  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:12.931867  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:15.430299  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.106552  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.106622  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.108053  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:16.217595  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:18.217758  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.220115  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
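	The interleaved pod_ready lines come from three other clusters in the same batch, each polling a metrics-server pod that never reports Ready. Their state could be inspected with plain kubectl, e.g. (a sketch; <context> stands for whichever minikube profile each test created, which is not shown in these lines):

	    kubectl --context <context> -n kube-system get pod metrics-server-57f55c9bc5-44qbm -o wide
	    kubectl --context <context> -n kube-system describe pod metrics-server-57f55c9bc5-44qbm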
	I0408 12:51:17.617145  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:17.630439  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:17.630520  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:17.672814  433881 cri.go:89] found id: ""
	I0408 12:51:17.672845  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.672853  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:17.672860  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:17.672936  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:17.715344  433881 cri.go:89] found id: ""
	I0408 12:51:17.715378  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.715391  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:17.715399  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:17.715464  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:17.757246  433881 cri.go:89] found id: ""
	I0408 12:51:17.757283  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.757295  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:17.757304  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:17.757373  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:17.798201  433881 cri.go:89] found id: ""
	I0408 12:51:17.798236  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.798245  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:17.798250  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:17.798312  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:17.838243  433881 cri.go:89] found id: ""
	I0408 12:51:17.838280  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.838296  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:17.838305  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:17.838376  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:17.877394  433881 cri.go:89] found id: ""
	I0408 12:51:17.877433  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.877446  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:17.877455  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:17.877522  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:17.917513  433881 cri.go:89] found id: ""
	I0408 12:51:17.917546  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.917557  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:17.917564  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:17.917631  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:17.959806  433881 cri.go:89] found id: ""
	I0408 12:51:17.959841  433881 logs.go:276] 0 containers: []
	W0408 12:51:17.959854  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:17.959872  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:17.959888  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:17.974835  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:17.974866  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:18.051066  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:18.051096  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:18.051110  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:18.130246  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:18.130294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:18.177977  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:18.178009  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:20.732943  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:20.747177  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:20.747250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:20.793434  433881 cri.go:89] found id: ""
	I0408 12:51:20.793462  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.793472  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:20.793478  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:20.793554  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:20.830880  433881 cri.go:89] found id: ""
	I0408 12:51:20.830915  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.830925  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:20.830931  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:20.830986  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:20.865660  433881 cri.go:89] found id: ""
	I0408 12:51:20.865698  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.865710  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:20.865718  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:20.865787  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:20.905977  433881 cri.go:89] found id: ""
	I0408 12:51:20.906009  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.906018  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:20.906023  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:20.906078  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:20.949244  433881 cri.go:89] found id: ""
	I0408 12:51:20.949273  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.949281  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:20.949288  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:20.949346  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:20.987438  433881 cri.go:89] found id: ""
	I0408 12:51:20.987466  433881 logs.go:276] 0 containers: []
	W0408 12:51:20.987475  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:20.987482  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:20.987534  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:21.028061  433881 cri.go:89] found id: ""
	I0408 12:51:21.028106  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.028123  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:21.028130  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:21.028187  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:21.065115  433881 cri.go:89] found id: ""
	I0408 12:51:21.065147  433881 logs.go:276] 0 containers: []
	W0408 12:51:21.065160  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:21.065171  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:21.065186  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:21.142100  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:21.142143  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:21.186259  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:21.186294  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:21.242038  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:21.242085  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:21.257483  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:21.257526  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:21.336027  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:17.930896  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:20.430609  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.108741  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.605215  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:22.716480  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:24.720217  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:23.836494  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:23.850931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:23.851001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:23.889352  433881 cri.go:89] found id: ""
	I0408 12:51:23.889385  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.889397  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:23.889406  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:23.889467  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:23.925240  433881 cri.go:89] found id: ""
	I0408 12:51:23.925271  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.925280  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:23.925286  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:23.925341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:23.965369  433881 cri.go:89] found id: ""
	I0408 12:51:23.965398  433881 logs.go:276] 0 containers: []
	W0408 12:51:23.965410  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:23.965417  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:23.965478  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:24.004828  433881 cri.go:89] found id: ""
	I0408 12:51:24.004864  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.004875  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:24.004882  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:24.004955  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:24.046959  433881 cri.go:89] found id: ""
	I0408 12:51:24.046996  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.047013  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:24.047022  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:24.047104  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:24.085408  433881 cri.go:89] found id: ""
	I0408 12:51:24.085447  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.085459  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:24.085468  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:24.085533  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:24.124156  433881 cri.go:89] found id: ""
	I0408 12:51:24.124193  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.124205  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:24.124214  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:24.124280  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:24.159973  433881 cri.go:89] found id: ""
	I0408 12:51:24.160011  433881 logs.go:276] 0 containers: []
	W0408 12:51:24.160023  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:24.160037  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:24.160055  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:24.238998  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:24.239047  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:24.282401  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:24.282439  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:24.339279  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:24.339328  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:24.354927  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:24.354965  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:24.432192  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
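	Every describe-nodes attempt above fails with "connection refused" on localhost:8443, i.e. no apiserver is listening. Two quick confirmations from inside the node (a sketch; it reuses the bundled kubectl and kubeconfig paths from the log, and assumes `ss` from iproute2 is available on the guest):

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig   # fails the same way while the apiserver is down
	    sudo ss -tlnp | grep 8443                                                                             # nothing listening on the apiserver port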
	I0408 12:51:22.929962  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:25.430340  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.605294  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:28.606623  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:27.218727  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.716524  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:26.932361  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:26.947709  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:26.947779  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:26.992251  433881 cri.go:89] found id: ""
	I0408 12:51:26.992282  433881 logs.go:276] 0 containers: []
	W0408 12:51:26.992290  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:26.992297  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:26.992366  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:27.033517  433881 cri.go:89] found id: ""
	I0408 12:51:27.033548  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.033560  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:27.033568  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:27.033635  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:27.072593  433881 cri.go:89] found id: ""
	I0408 12:51:27.072628  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.072641  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:27.072650  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:27.072726  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:27.115728  433881 cri.go:89] found id: ""
	I0408 12:51:27.115761  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.115771  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:27.115779  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:27.115846  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:27.154218  433881 cri.go:89] found id: ""
	I0408 12:51:27.154254  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.154266  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:27.154274  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:27.154341  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:27.193084  433881 cri.go:89] found id: ""
	I0408 12:51:27.193118  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.193134  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:27.193142  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:27.193216  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:27.233401  433881 cri.go:89] found id: ""
	I0408 12:51:27.233436  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.233449  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:27.233458  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:27.233524  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:27.274272  433881 cri.go:89] found id: ""
	I0408 12:51:27.274307  433881 logs.go:276] 0 containers: []
	W0408 12:51:27.274316  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:27.274325  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:27.274339  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:27.316918  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:27.316956  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:27.371970  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:27.372014  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.387640  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:27.387679  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:27.468583  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:27.468611  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:27.468628  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.049078  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:30.063661  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:30.063750  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:30.102000  433881 cri.go:89] found id: ""
	I0408 12:51:30.102031  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.102049  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:30.102058  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:30.102120  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:30.144972  433881 cri.go:89] found id: ""
	I0408 12:51:30.145001  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.145010  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:30.145017  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:30.145076  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:30.185179  433881 cri.go:89] found id: ""
	I0408 12:51:30.185250  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.185274  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:30.185284  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:30.185356  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:30.224138  433881 cri.go:89] found id: ""
	I0408 12:51:30.224169  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.224178  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:30.224185  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:30.224245  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:30.262754  433881 cri.go:89] found id: ""
	I0408 12:51:30.262788  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.262800  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:30.262809  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:30.262872  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:30.296574  433881 cri.go:89] found id: ""
	I0408 12:51:30.296608  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.296617  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:30.296624  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:30.296685  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:30.337619  433881 cri.go:89] found id: ""
	I0408 12:51:30.337653  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.337665  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:30.337672  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:30.337737  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:30.378808  433881 cri.go:89] found id: ""
	I0408 12:51:30.378837  433881 logs.go:276] 0 containers: []
	W0408 12:51:30.378849  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:30.378860  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:30.378876  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:30.462867  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:30.462895  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:30.462911  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:30.549824  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:30.549871  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:30.594270  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:30.594302  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:30.650199  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:30.650247  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:27.430647  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:29.929105  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:30.607227  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.106814  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.106890  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:31.716747  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.718962  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.166177  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:33.181168  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:33.181277  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:33.220931  433881 cri.go:89] found id: ""
	I0408 12:51:33.220960  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.220970  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:33.220976  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:33.221043  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:33.267118  433881 cri.go:89] found id: ""
	I0408 12:51:33.267155  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.267168  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:33.267177  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:33.267250  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:33.308486  433881 cri.go:89] found id: ""
	I0408 12:51:33.308522  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.308532  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:33.308540  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:33.308614  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:33.344735  433881 cri.go:89] found id: ""
	I0408 12:51:33.344773  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.344785  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:33.344793  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:33.344857  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:33.384130  433881 cri.go:89] found id: ""
	I0408 12:51:33.384162  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.384175  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:33.384184  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:33.384246  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:33.422187  433881 cri.go:89] found id: ""
	I0408 12:51:33.422224  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.422236  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:33.422244  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:33.422309  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:33.462281  433881 cri.go:89] found id: ""
	I0408 12:51:33.462310  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.462320  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:33.462326  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:33.462412  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:33.501273  433881 cri.go:89] found id: ""
	I0408 12:51:33.501304  433881 logs.go:276] 0 containers: []
	W0408 12:51:33.501315  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:33.501329  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:33.501347  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:33.573407  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:33.573435  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:33.573453  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:33.659573  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:33.659615  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:33.712568  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:33.712600  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:33.769457  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:33.769500  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.285759  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:36.302490  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:36.302576  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:36.341170  433881 cri.go:89] found id: ""
	I0408 12:51:36.341204  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.341218  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:36.341227  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:36.341296  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:36.380366  433881 cri.go:89] found id: ""
	I0408 12:51:36.380395  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.380403  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:36.380411  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:36.380485  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:36.428755  433881 cri.go:89] found id: ""
	I0408 12:51:36.428786  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.428795  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:36.428801  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:36.428852  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:36.473849  433881 cri.go:89] found id: ""
	I0408 12:51:36.473893  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.473921  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:36.473930  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:36.474001  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:36.513922  433881 cri.go:89] found id: ""
	I0408 12:51:36.513967  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.513980  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:36.513989  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:36.514059  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:36.557731  433881 cri.go:89] found id: ""
	I0408 12:51:36.557768  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.557777  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:36.557784  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:36.557861  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:36.601978  433881 cri.go:89] found id: ""
	I0408 12:51:36.602010  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.602020  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:36.602031  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:36.602099  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:31.930145  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:33.931893  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:35.932546  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:37.606783  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:39.607738  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.217708  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:38.717067  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.721801  433557 pod_ready.go:102] pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:36.645189  433881 cri.go:89] found id: ""
	I0408 12:51:36.645226  433881 logs.go:276] 0 containers: []
	W0408 12:51:36.645244  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:36.645257  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:36.645276  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:36.739293  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:36.739346  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:36.786962  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:36.787001  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:36.842456  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:36.842499  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:36.857848  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:36.857883  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:36.939227  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:39.440047  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:39.456206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:39.456304  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:39.497752  433881 cri.go:89] found id: ""
	I0408 12:51:39.497792  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.497804  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:39.497815  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:39.497882  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:39.536192  433881 cri.go:89] found id: ""
	I0408 12:51:39.536224  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.536237  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:39.536245  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:39.536315  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:39.573874  433881 cri.go:89] found id: ""
	I0408 12:51:39.573917  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.573932  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:39.573939  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:39.574004  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:39.614525  433881 cri.go:89] found id: ""
	I0408 12:51:39.614562  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.614577  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:39.614585  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:39.614651  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:39.654414  433881 cri.go:89] found id: ""
	I0408 12:51:39.654455  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.654467  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:39.654476  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:39.654549  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:39.691814  433881 cri.go:89] found id: ""
	I0408 12:51:39.691847  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.691860  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:39.691868  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:39.691939  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:39.735572  433881 cri.go:89] found id: ""
	I0408 12:51:39.735609  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.735622  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:39.735630  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:39.735707  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:39.778827  433881 cri.go:89] found id: ""
	I0408 12:51:39.778860  433881 logs.go:276] 0 containers: []
	W0408 12:51:39.778870  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:39.778881  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:39.778894  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:39.857861  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:39.857903  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:39.901597  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:39.901652  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:39.955660  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:39.955730  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:39.972424  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:39.972461  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:40.052884  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:38.429490  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:40.932035  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:42.106879  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:44.607134  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:41.210350  433557 pod_ready.go:81] duration metric: took 4m0.000311819s for pod "metrics-server-569cc877fc-dbb9b" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:41.210399  433557 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0408 12:51:41.210413  433557 pod_ready.go:38] duration metric: took 4m3.201150727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:41.210464  433557 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:51:41.210520  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:41.210591  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:41.269963  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:41.269999  433557 cri.go:89] found id: ""
	I0408 12:51:41.270010  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:41.270072  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.275411  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:41.275495  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:41.319478  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:41.319517  433557 cri.go:89] found id: ""
	I0408 12:51:41.319529  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:41.319590  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.329956  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:41.330045  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:41.380017  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:41.380049  433557 cri.go:89] found id: ""
	I0408 12:51:41.380061  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:41.380131  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.384973  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:41.385077  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:41.429757  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:41.429786  433557 cri.go:89] found id: ""
	I0408 12:51:41.429798  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:41.429863  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.435404  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:41.435488  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:41.484998  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:41.485031  433557 cri.go:89] found id: ""
	I0408 12:51:41.485042  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:41.485111  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.489802  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:41.489878  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:41.543982  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.544016  433557 cri.go:89] found id: ""
	I0408 12:51:41.544028  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:41.544096  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.548766  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:41.548836  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:41.588398  433557 cri.go:89] found id: ""
	I0408 12:51:41.588425  433557 logs.go:276] 0 containers: []
	W0408 12:51:41.588433  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:41.588439  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:41.588498  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:41.635748  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:41.635771  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:41.635775  433557 cri.go:89] found id: ""
	I0408 12:51:41.635782  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:41.635849  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.641800  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:41.646173  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:41.646206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:41.717189  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:41.717228  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:41.779618  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:41.779653  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:41.840050  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:41.840092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:41.855982  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:41.856016  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:42.016416  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:42.016455  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:42.085493  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:42.085538  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:42.132590  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:42.132626  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:42.642069  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:42.642125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:42.708516  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:42.708566  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:42.759072  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:42.759125  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:42.810189  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:42.810242  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:42.855931  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:42.855971  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.396658  433557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.414640  433557 api_server.go:72] duration metric: took 4m14.728700184s to wait for apiserver process to appear ...
	I0408 12:51:45.414671  433557 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:51:45.414714  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.414772  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.460983  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:45.461012  433557 cri.go:89] found id: ""
	I0408 12:51:45.461023  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:45.461102  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.466928  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.467037  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.516723  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:45.516746  433557 cri.go:89] found id: ""
	I0408 12:51:45.516755  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:45.516813  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.521315  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.521413  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.560838  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.560865  433557 cri.go:89] found id: ""
	I0408 12:51:45.560876  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:45.560926  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.565852  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.565937  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.610154  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:45.610175  433557 cri.go:89] found id: ""
	I0408 12:51:45.610183  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:45.610229  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.615014  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.615098  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.658261  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:45.658292  433557 cri.go:89] found id: ""
	I0408 12:51:45.658304  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:45.658367  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.663148  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.663242  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:45.708805  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.708838  433557 cri.go:89] found id: ""
	I0408 12:51:45.708850  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:45.708906  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.713733  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:45.713800  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:45.763432  433557 cri.go:89] found id: ""
	I0408 12:51:45.763465  433557 logs.go:276] 0 containers: []
	W0408 12:51:45.763477  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:45.763486  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:45.763555  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:45.808689  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:45.808711  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:45.808715  433557 cri.go:89] found id: ""
	I0408 12:51:45.808723  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:45.808782  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.813386  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:45.818556  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:45.818589  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:42.553021  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:42.569100  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:42.569174  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:42.612835  433881 cri.go:89] found id: ""
	I0408 12:51:42.612870  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.612882  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:42.612891  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:42.612965  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:42.653224  433881 cri.go:89] found id: ""
	I0408 12:51:42.653266  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.653276  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:42.653285  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:42.653351  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:42.703612  433881 cri.go:89] found id: ""
	I0408 12:51:42.703648  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.703658  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:42.703664  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:42.703756  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:42.749765  433881 cri.go:89] found id: ""
	I0408 12:51:42.749799  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.749810  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:42.749818  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:42.749894  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:42.794008  433881 cri.go:89] found id: ""
	I0408 12:51:42.794042  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.794054  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:42.794064  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:42.794132  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:42.838099  433881 cri.go:89] found id: ""
	I0408 12:51:42.838134  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.838146  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:42.838154  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:42.838223  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:42.883552  433881 cri.go:89] found id: ""
	I0408 12:51:42.883589  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.883602  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:42.883615  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:42.883712  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:42.922871  433881 cri.go:89] found id: ""
	I0408 12:51:42.922899  433881 logs.go:276] 0 containers: []
	W0408 12:51:42.922910  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:42.922922  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:42.922958  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:42.979842  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:42.979885  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:42.995164  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:42.995198  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:43.075880  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:43.075906  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:43.075940  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:43.164047  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:43.164113  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:45.733586  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:45.749054  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:45.749158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:45.793132  433881 cri.go:89] found id: ""
	I0408 12:51:45.793169  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.793181  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:45.793189  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:45.793257  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:45.834562  433881 cri.go:89] found id: ""
	I0408 12:51:45.834597  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.834608  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:45.834616  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:45.834686  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:45.876365  433881 cri.go:89] found id: ""
	I0408 12:51:45.876404  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.876415  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:45.876424  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:45.876489  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:45.926205  433881 cri.go:89] found id: ""
	I0408 12:51:45.926241  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.926254  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:45.926262  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:45.926331  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:45.969462  433881 cri.go:89] found id: ""
	I0408 12:51:45.969494  433881 logs.go:276] 0 containers: []
	W0408 12:51:45.969506  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:45.969513  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:45.969582  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:46.011980  433881 cri.go:89] found id: ""
	I0408 12:51:46.012008  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.012031  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:46.012040  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:46.012098  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:46.054484  433881 cri.go:89] found id: ""
	I0408 12:51:46.054522  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.054533  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:46.054542  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:46.054609  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:46.094438  433881 cri.go:89] found id: ""
	I0408 12:51:46.094468  433881 logs.go:276] 0 containers: []
	W0408 12:51:46.094477  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:46.094486  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.094503  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:46.186390  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:46.186415  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.186437  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.283200  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.283240  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:46.336507  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.336544  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.392178  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.392221  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:43.429577  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:45.431057  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:47.106109  433674 pod_ready.go:102] pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:48.599265  433674 pod_ready.go:81] duration metric: took 4m0.000260398s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" ...
	E0408 12:51:48.599302  433674 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-z2ztl" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:51:48.599335  433674 pod_ready.go:38] duration metric: took 4m13.995684279s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:51:48.599373  433674 kubeadm.go:591] duration metric: took 4m22.072516751s to restartPrimaryControlPlane
	W0408 12:51:48.599529  433674 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:48.599619  433674 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
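At this point two of the interleaved runs hit their 4m0s deadline waiting for a metrics-server pod to become Ready: process 433557 gives up on metrics-server-569cc877fc-dbb9b but keeps going (its control-plane containers were all found above), while process 433674 gives up on metrics-server-57f55c9bc5-z2ztl and falls back to kubeadm reset to rebuild the cluster. To see why such a pod stays NotReady, a sketch of the usual inspection (pod name taken from this run; the commands are standard kubectl, not part of the test harness) would be:

    # overall pod state in kube-system
    kubectl -n kube-system get pods -o wide
    # conditions and recent events usually show the failing readiness probe or image pull
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-z2ztl
    # container logs for the same pod
    kubectl -n kube-system logs metrics-server-57f55c9bc5-z2ztl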
	I0408 12:51:45.864458  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:45.864503  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:45.907964  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:45.908000  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:45.980082  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:45.980123  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:46.041294  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:46.041330  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:46.102117  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:46.102171  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:46.188553  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:46.188583  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:46.234191  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:46.234229  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:46.281240  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:46.281273  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:46.721047  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:46.721092  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:46.781387  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:46.781429  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:46.797003  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:46.797043  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:46.917073  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:46.917109  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:49.481948  433557 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0408 12:51:49.488261  433557 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0408 12:51:49.489694  433557 api_server.go:141] control plane version: v1.30.0-rc.0
	I0408 12:51:49.489726  433557 api_server.go:131] duration metric: took 4.075047023s to wait for apiserver health ...
	I0408 12:51:49.489737  433557 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:51:49.489772  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:49.489845  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:49.535955  433557 cri.go:89] found id: "380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.535980  433557 cri.go:89] found id: ""
	I0408 12:51:49.535990  433557 logs.go:276] 1 containers: [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb]
	I0408 12:51:49.536052  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.543143  433557 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:49.543239  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.590041  433557 cri.go:89] found id: "31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:49.590075  433557 cri.go:89] found id: ""
	I0408 12:51:49.590087  433557 logs.go:276] 1 containers: [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f]
	I0408 12:51:49.590155  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.595726  433557 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.595803  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.645009  433557 cri.go:89] found id: "eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:49.645046  433557 cri.go:89] found id: ""
	I0408 12:51:49.645057  433557 logs.go:276] 1 containers: [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346]
	I0408 12:51:49.645110  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.650243  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.650329  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.693859  433557 cri.go:89] found id: "bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:49.693882  433557 cri.go:89] found id: ""
	I0408 12:51:49.693895  433557 logs.go:276] 1 containers: [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d]
	I0408 12:51:49.693972  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.699620  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.699709  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.755614  433557 cri.go:89] found id: "9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:49.755646  433557 cri.go:89] found id: ""
	I0408 12:51:49.755657  433557 logs.go:276] 1 containers: [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568]
	I0408 12:51:49.755739  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.761838  433557 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.761913  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.808919  433557 cri.go:89] found id: "76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:49.808950  433557 cri.go:89] found id: ""
	I0408 12:51:49.808961  433557 logs.go:276] 1 containers: [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7]
	I0408 12:51:49.809040  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.813965  433557 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.814046  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.859700  433557 cri.go:89] found id: ""
	I0408 12:51:49.859737  433557 logs.go:276] 0 containers: []
	W0408 12:51:49.859748  433557 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.859757  433557 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0408 12:51:49.859832  433557 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0408 12:51:49.908020  433557 cri.go:89] found id: "a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:49.908044  433557 cri.go:89] found id: "78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:49.908050  433557 cri.go:89] found id: ""
	I0408 12:51:49.908060  433557 logs.go:276] 2 containers: [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b]
	I0408 12:51:49.908129  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.913034  433557 ssh_runner.go:195] Run: which crictl
	I0408 12:51:49.919193  433557 logs.go:123] Gathering logs for kube-apiserver [380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb] ...
	I0408 12:51:49.919233  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380c451b3806ee20ecec11a64b6fc1ce4c296fb9397935b7aa88ac4ba19f36eb"
	I0408 12:51:49.984657  433557 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.984704  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:50.003487  433557 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:50.003526  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0408 12:51:50.139417  433557 logs.go:123] Gathering logs for etcd [31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f] ...
	I0408 12:51:50.139481  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31df11caa819e52525545fdd96251a453c923d5725e805be6dbb2efc326e421f"
	I0408 12:51:50.240166  433557 logs.go:123] Gathering logs for coredns [eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346] ...
	I0408 12:51:50.240206  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef06839046da71ecba8fa417603b3078b9cfa0e3c6e346fb921bab453f6a346"
	I0408 12:51:50.288776  433557 logs.go:123] Gathering logs for kube-scheduler [bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d] ...
	I0408 12:51:50.288823  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb1c9d0aa38890f39459936ae1321f7ffb1a56367dd534b63b829d3d564aa30d"
	I0408 12:51:50.339222  433557 logs.go:123] Gathering logs for kube-proxy [9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568] ...
	I0408 12:51:50.339252  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9afab6e492932522cb99d06c265504a0e2fc7c186d4f8ad5e706184d1a68b568"
	I0408 12:51:50.402263  433557 logs.go:123] Gathering logs for kube-controller-manager [76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7] ...
	I0408 12:51:50.402308  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76a18493a630c04ecc95c6b456332c091fc589727dcdf42a410050e0196589e7"
	I0408 12:51:50.461894  433557 logs.go:123] Gathering logs for storage-provisioner [a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76] ...
	I0408 12:51:50.461946  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6c1545f860a4e79aeb79a75b5541ee685687481867d23d95f5876136f720e76"
	I0408 12:51:50.507329  433557 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:50.507373  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:50.576851  433557 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:50.576894  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:48.908956  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:48.932321  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:51:48.932414  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:51:48.988509  433881 cri.go:89] found id: ""
	I0408 12:51:48.988542  433881 logs.go:276] 0 containers: []
	W0408 12:51:48.988554  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:51:48.988563  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:51:48.988632  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:51:49.026573  433881 cri.go:89] found id: ""
	I0408 12:51:49.026605  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.026613  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:51:49.026618  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:51:49.026681  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:51:49.072625  433881 cri.go:89] found id: ""
	I0408 12:51:49.072661  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.072675  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:51:49.072684  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:51:49.072748  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:51:49.120630  433881 cri.go:89] found id: ""
	I0408 12:51:49.120662  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.120674  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:51:49.120683  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:51:49.120743  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:51:49.169189  433881 cri.go:89] found id: ""
	I0408 12:51:49.169218  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.169231  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:51:49.169239  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:51:49.169307  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:51:49.216077  433881 cri.go:89] found id: ""
	I0408 12:51:49.216115  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.216128  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:51:49.216141  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:51:49.216209  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:51:49.258519  433881 cri.go:89] found id: ""
	I0408 12:51:49.258556  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.258568  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:51:49.258576  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:51:49.258658  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:51:49.298058  433881 cri.go:89] found id: ""
	I0408 12:51:49.298092  433881 logs.go:276] 0 containers: []
	W0408 12:51:49.298103  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:51:49.298117  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:51:49.298133  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:51:49.351961  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:51:49.352020  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0408 12:51:49.369774  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:51:49.369822  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:51:49.465570  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:51:49.465598  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:51:49.465616  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:51:49.551701  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:51:49.551753  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:47.932221  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.430702  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:50.947824  433557 logs.go:123] Gathering logs for container status ...
	I0408 12:51:50.947878  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:51:51.007034  433557 logs.go:123] Gathering logs for storage-provisioner [78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b] ...
	I0408 12:51:51.007084  433557 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78ee8679f8367ce4d36629c09e0a28b1fe1d6d6e254925c1deb6300e34819c3b"
	I0408 12:51:53.563768  433557 system_pods.go:59] 8 kube-system pods found
	I0408 12:51:53.563811  433557 system_pods.go:61] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.563818  433557 system_pods.go:61] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.563824  433557 system_pods.go:61] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.563829  433557 system_pods.go:61] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.563835  433557 system_pods.go:61] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.563840  433557 system_pods.go:61] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.563850  433557 system_pods.go:61] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.563857  433557 system_pods.go:61] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.563870  433557 system_pods.go:74] duration metric: took 4.074125222s to wait for pod list to return data ...
	I0408 12:51:53.563884  433557 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:51:53.566991  433557 default_sa.go:45] found service account: "default"
	I0408 12:51:53.567015  433557 default_sa.go:55] duration metric: took 3.122873ms for default service account to be created ...
	I0408 12:51:53.567024  433557 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:51:53.574517  433557 system_pods.go:86] 8 kube-system pods found
	I0408 12:51:53.574558  433557 system_pods.go:89] "coredns-7db6d8ff4d-ndz4x" [f33b7eb7-3553-4027-ac38-f3ee62cc67d5] Running
	I0408 12:51:53.574565  433557 system_pods.go:89] "etcd-no-preload-135234" [4abc7c48-9c69-441b-b112-f3fdf74e67f8] Running
	I0408 12:51:53.574570  433557 system_pods.go:89] "kube-apiserver-no-preload-135234" [fecc8199-84b4-49c5-bbb8-5c90c9537948] Running
	I0408 12:51:53.574575  433557 system_pods.go:89] "kube-controller-manager-no-preload-135234" [46223816-961d-4232-a868-3e1f25bb131d] Running
	I0408 12:51:53.574581  433557 system_pods.go:89] "kube-proxy-tr6td" [4e97a709-efb2-4d44-8f2e-b9e9fef5fb70] Running
	I0408 12:51:53.574587  433557 system_pods.go:89] "kube-scheduler-no-preload-135234" [55ece228-e160-4bea-b198-956a4d97f4d6] Running
	I0408 12:51:53.574598  433557 system_pods.go:89] "metrics-server-569cc877fc-dbb9b" [f435d865-85f3-4d32-bedf-c3bf053500fe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:51:53.574605  433557 system_pods.go:89] "storage-provisioner" [64374707-2bed-4656-a07a-38e950da5333] Running
	I0408 12:51:53.574616  433557 system_pods.go:126] duration metric: took 7.585497ms to wait for k8s-apps to be running ...
	I0408 12:51:53.574629  433557 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:51:53.574720  433557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:53.597605  433557 system_svc.go:56] duration metric: took 22.957663ms WaitForService to wait for kubelet
	I0408 12:51:53.597658  433557 kubeadm.go:576] duration metric: took 4m22.91172229s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:51:53.597683  433557 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:51:53.601940  433557 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:51:53.601992  433557 node_conditions.go:123] node cpu capacity is 2
	I0408 12:51:53.602009  433557 node_conditions.go:105] duration metric: took 4.320913ms to run NodePressure ...
	I0408 12:51:53.602028  433557 start.go:240] waiting for startup goroutines ...
	I0408 12:51:53.602040  433557 start.go:245] waiting for cluster config update ...
	I0408 12:51:53.602060  433557 start.go:254] writing updated cluster config ...
	I0408 12:51:53.602426  433557 ssh_runner.go:195] Run: rm -f paused
	I0408 12:51:53.660257  433557 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0408 12:51:53.662533  433557 out.go:177] * Done! kubectl is now configured to use "no-preload-135234" cluster and "default" namespace by default
	I0408 12:51:52.104186  433881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:51:52.125116  433881 kubeadm.go:591] duration metric: took 4m3.004969382s to restartPrimaryControlPlane
	W0408 12:51:52.125203  433881 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:51:52.125233  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:51:54.046318  433881 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.921055247s)
	I0408 12:51:54.046411  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:51:54.061948  433881 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:51:54.073014  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:51:54.083545  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:51:54.083566  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:51:54.083623  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:51:54.093457  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:51:54.093541  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:51:54.104924  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:51:54.114649  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:51:54.114733  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:51:54.125143  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.135209  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:51:54.135283  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:51:54.146586  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:51:54.157676  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:51:54.157740  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:51:54.168585  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:51:54.411949  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:51:52.434513  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:54.930343  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:57.432046  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:51:59.436031  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:01.930142  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:03.931249  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:06.429806  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:08.929311  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:10.929707  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:13.430287  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:15.430449  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:17.933664  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:20.428983  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:21.300307  433674 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.700649463s)
	I0408 12:52:21.300429  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:21.321628  433674 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:21.334359  433674 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:21.345697  433674 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:21.345755  433674 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:21.345804  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:52:21.356798  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:21.356868  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:21.368622  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:52:21.379589  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:21.379676  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:21.391211  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.401783  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:21.401874  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:21.413655  433674 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:52:21.424585  433674 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:21.424673  433674 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:21.436887  433674 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:21.495891  433674 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:21.496022  433674 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:21.667820  433674 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:21.667973  433674 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:21.668100  433674 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:21.904532  433674 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:21.906631  433674 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:21.906736  433674 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:21.906833  433674 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:21.906962  433674 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:21.907084  433674 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:21.907206  433674 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:21.907283  433674 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:21.907372  433674 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:21.907705  433674 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:21.908164  433674 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:21.908536  433674 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:21.908852  433674 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:21.908942  433674 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:22.096319  433674 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:22.286425  433674 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:22.442534  433674 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:22.542901  433674 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:22.959098  433674 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:22.959656  433674 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:22.962359  433674 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:22.965011  433674 out.go:204]   - Booting up control plane ...
	I0408 12:52:22.965148  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:22.965830  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:22.966718  433674 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:22.987425  433674 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:22.988618  433674 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:22.988690  433674 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:23.134634  433674 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:52:22.429735  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.431237  433439 pod_ready.go:102] pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace has status "Ready":"False"
	I0408 12:52:24.923026  433439 pod_ready.go:81] duration metric: took 4m0.000804438s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" ...
	E0408 12:52:24.923079  433439 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-44qbm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0408 12:52:24.923103  433439 pod_ready.go:38] duration metric: took 4m6.498748448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:24.923143  433439 kubeadm.go:591] duration metric: took 4m14.484131334s to restartPrimaryControlPlane
	W0408 12:52:24.923222  433439 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0408 12:52:24.923260  433439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:52:29.641484  433674 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.505486 seconds
	I0408 12:52:29.659612  433674 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:52:29.683882  433674 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:52:30.237806  433674 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:52:30.238135  433674 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-488947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:52:30.755095  433674 kubeadm.go:309] [bootstrap-token] Using token: kwhj7g.e2hm9yupaxknooep
	I0408 12:52:30.756904  433674 out.go:204]   - Configuring RBAC rules ...
	I0408 12:52:30.757044  433674 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:52:30.763322  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:52:30.776489  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:52:30.780180  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:52:30.784949  433674 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:52:30.789409  433674 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:52:30.810228  433674 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:52:31.071672  433674 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:52:31.180390  433674 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:52:31.180421  433674 kubeadm.go:309] 
	I0408 12:52:31.180493  433674 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:52:31.180504  433674 kubeadm.go:309] 
	I0408 12:52:31.180626  433674 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:52:31.180652  433674 kubeadm.go:309] 
	I0408 12:52:31.180682  433674 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:52:31.180758  433674 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:52:31.180823  433674 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:52:31.180835  433674 kubeadm.go:309] 
	I0408 12:52:31.180898  433674 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:52:31.180908  433674 kubeadm.go:309] 
	I0408 12:52:31.180967  433674 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:52:31.180978  433674 kubeadm.go:309] 
	I0408 12:52:31.181069  433674 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:52:31.181200  433674 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:52:31.181301  433674 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:52:31.181312  433674 kubeadm.go:309] 
	I0408 12:52:31.181446  433674 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:52:31.181564  433674 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:52:31.181577  433674 kubeadm.go:309] 
	I0408 12:52:31.181706  433674 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.181869  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:52:31.181923  433674 kubeadm.go:309] 	--control-plane 
	I0408 12:52:31.181933  433674 kubeadm.go:309] 
	I0408 12:52:31.182039  433674 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:52:31.182055  433674 kubeadm.go:309] 
	I0408 12:52:31.182167  433674 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kwhj7g.e2hm9yupaxknooep \
	I0408 12:52:31.182323  433674 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:52:31.182467  433674 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:52:31.182492  433674 cni.go:84] Creating CNI manager for ""
	I0408 12:52:31.182502  433674 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:52:31.184299  433674 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:52:31.185716  433674 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:52:31.217708  433674 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0408 12:52:31.277627  433674 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:52:31.277716  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:31.277740  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-488947 minikube.k8s.io/updated_at=2024_04_08T12_52_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=embed-certs-488947 minikube.k8s.io/primary=true
	I0408 12:52:31.591490  433674 ops.go:34] apiserver oom_adj: -16
	I0408 12:52:31.591651  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.092642  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:32.591845  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.092645  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:33.592585  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.092066  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:34.592232  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.091882  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:35.591794  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.091849  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:36.592616  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.091816  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:37.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.091756  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:38.592114  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.092524  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:39.591838  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.091853  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:40.591747  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.092421  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:41.592611  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.092369  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:42.592443  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.092638  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:43.592549  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.091831  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.592358  433674 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:52:44.799776  433674 kubeadm.go:1107] duration metric: took 13.522136387s to wait for elevateKubeSystemPrivileges
	W0408 12:52:44.799833  433674 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:52:44.799845  433674 kubeadm.go:393] duration metric: took 5m18.325910079s to StartCluster
	I0408 12:52:44.799870  433674 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.799981  433674 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:52:44.802396  433674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:52:44.802704  433674 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.159 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:52:44.804525  433674 out.go:177] * Verifying Kubernetes components...
	I0408 12:52:44.802776  433674 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:52:44.802921  433674 config.go:182] Loaded profile config "embed-certs-488947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:52:44.805724  433674 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 12:52:44.805735  433674 addons.go:69] Setting metrics-server=true in profile "embed-certs-488947"
	I0408 12:52:44.805751  433674 addons.go:69] Setting default-storageclass=true in profile "embed-certs-488947"
	I0408 12:52:44.805777  433674 addons.go:234] Setting addon metrics-server=true in "embed-certs-488947"
	W0408 12:52:44.805792  433674 addons.go:243] addon metrics-server should already be in state true
	I0408 12:52:44.805824  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805727  433674 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-488947"
	I0408 12:52:44.805869  433674 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-488947"
	W0408 12:52:44.805883  433674 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:52:44.805915  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.805834  433674 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-488947"
	I0408 12:52:44.806260  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806262  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806266  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.806286  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806288  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.806326  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.824170  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I0408 12:52:44.824862  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.825517  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.825547  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.826049  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.826714  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.826752  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.827345  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0408 12:52:44.827569  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0408 12:52:44.828195  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828218  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.828860  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.828892  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829023  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.829040  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.829499  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829541  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.829687  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.830201  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.830247  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.834128  433674 addons.go:234] Setting addon default-storageclass=true in "embed-certs-488947"
	W0408 12:52:44.834156  433674 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:52:44.834189  433674 host.go:66] Checking if "embed-certs-488947" exists ...
	I0408 12:52:44.834569  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.834611  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.845829  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0408 12:52:44.846556  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.847545  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.847571  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.848210  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.848478  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.850407  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.850783  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0408 12:52:44.853144  433674 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:52:44.851322  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.854214  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0408 12:52:44.855198  433674 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:44.855222  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:52:44.855245  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.855434  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.855766  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855797  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.855936  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.855956  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.856190  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856264  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.856382  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.856937  433674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:52:44.856973  433674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:52:44.857994  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.859623  433674 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:52:44.860991  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:52:44.861012  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:52:44.858778  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.861032  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.861051  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.861072  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.859293  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.861282  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.861617  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.861817  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.863813  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864274  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.864299  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.864483  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.864681  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.864846  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.865028  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:44.874355  433674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0408 12:52:44.874834  433674 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:52:44.875388  433674 main.go:141] libmachine: Using API Version  1
	I0408 12:52:44.875418  433674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:52:44.875775  433674 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:52:44.875967  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetState
	I0408 12:52:44.877519  433674 main.go:141] libmachine: (embed-certs-488947) Calling .DriverName
	I0408 12:52:44.877786  433674 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:44.877803  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:52:44.877818  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHHostname
	I0408 12:52:44.880463  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.880846  433674 main.go:141] libmachine: (embed-certs-488947) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:17", ip: ""} in network mk-embed-certs-488947: {Iface:virbr1 ExpiryTime:2024-04-08 13:47:11 +0000 UTC Type:0 Mac:52:54:00:f4:fc:17 Iaid: IPaddr:192.168.72.159 Prefix:24 Hostname:embed-certs-488947 Clientid:01:52:54:00:f4:fc:17}
	I0408 12:52:44.880874  433674 main.go:141] libmachine: (embed-certs-488947) DBG | domain embed-certs-488947 has defined IP address 192.168.72.159 and MAC address 52:54:00:f4:fc:17 in network mk-embed-certs-488947
	I0408 12:52:44.881040  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHPort
	I0408 12:52:44.881234  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHKeyPath
	I0408 12:52:44.881615  433674 main.go:141] libmachine: (embed-certs-488947) Calling .GetSSHUsername
	I0408 12:52:44.881753  433674 sshutil.go:53] new ssh client: &{IP:192.168.72.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/embed-certs-488947/id_rsa Username:docker}
	I0408 12:52:45.057304  433674 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:52:45.082702  433674 node_ready.go:35] waiting up to 6m0s for node "embed-certs-488947" to be "Ready" ...
	I0408 12:52:45.091955  433674 node_ready.go:49] node "embed-certs-488947" has status "Ready":"True"
	I0408 12:52:45.091994  433674 node_ready.go:38] duration metric: took 9.246027ms for node "embed-certs-488947" to be "Ready" ...
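A host-side equivalent of the 6m node-Ready wait logged above (kubectl context assumed to match the profile name, which is how minikube configures kubeconfig) would be, roughly:

    kubectl --context embed-certs-488947 wait --for=condition=Ready \
      node/embed-certs-488947 --timeout=6m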
	I0408 12:52:45.092007  433674 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:45.101654  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:45.237037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:52:45.237068  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:52:45.238421  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:52:45.274088  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:52:45.295037  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:52:45.295078  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:52:45.397474  433674 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:45.397504  433674 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:52:45.431610  433674 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:52:46.375681  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.101541881s)
	I0408 12:52:46.375827  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.375862  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376204  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.376244  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.137166571s)
	I0408 12:52:46.376271  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.376291  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.376309  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376313  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.376336  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.376319  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.377184  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377205  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377613  433674 main.go:141] libmachine: (embed-certs-488947) DBG | Closing plugin on server side
	I0408 12:52:46.377680  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.377699  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.377709  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.377747  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.378168  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.378182  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.413325  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.413361  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.413757  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.413780  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.679538  433674 pod_ready.go:92] pod "coredns-76f75df574-4gdp4" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.679577  433674 pod_ready.go:81] duration metric: took 1.577895468s for pod "coredns-76f75df574-4gdp4" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.679596  433674 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760007  433674 pod_ready.go:92] pod "coredns-76f75df574-r5rxq" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.760043  433674 pod_ready.go:81] duration metric: took 80.437752ms for pod "coredns-76f75df574-r5rxq" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.760059  433674 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.803070  433674 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.371401052s)
	I0408 12:52:46.803136  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803150  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803496  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803519  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803530  433674 main.go:141] libmachine: Making call to close driver server
	I0408 12:52:46.803539  433674 main.go:141] libmachine: (embed-certs-488947) Calling .Close
	I0408 12:52:46.803846  433674 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:52:46.803862  433674 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:52:46.803882  433674 addons.go:470] Verifying addon metrics-server=true in "embed-certs-488947"
	I0408 12:52:46.806034  433674 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0408 12:52:46.804164  433674 pod_ready.go:92] pod "etcd-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.807597  433674 pod_ready.go:81] duration metric: took 47.521367ms for pod "etcd-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807622  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.807621  433674 addons.go:505] duration metric: took 2.004847213s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0408 12:52:46.827049  433674 pod_ready.go:92] pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.827075  433674 pod_ready.go:81] duration metric: took 19.440746ms for pod "kube-apiserver-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.827086  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848718  433674 pod_ready.go:92] pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:46.848759  433674 pod_ready.go:81] duration metric: took 21.664037ms for pod "kube-controller-manager-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:46.848775  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087350  433674 pod_ready.go:92] pod "kube-proxy-mqrtp" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.087387  433674 pod_ready.go:81] duration metric: took 238.602902ms for pod "kube-proxy-mqrtp" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.087403  433674 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486822  433674 pod_ready.go:92] pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace has status "Ready":"True"
	I0408 12:52:47.486863  433674 pod_ready.go:81] duration metric: took 399.44977ms for pod "kube-scheduler-embed-certs-488947" in "kube-system" namespace to be "Ready" ...
	I0408 12:52:47.486875  433674 pod_ready.go:38] duration metric: took 2.394853452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:52:47.486895  433674 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:52:47.486967  433674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:52:47.517426  433674 api_server.go:72] duration metric: took 2.714672176s to wait for apiserver process to appear ...
	I0408 12:52:47.517461  433674 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:52:47.517492  433674 api_server.go:253] Checking apiserver healthz at https://192.168.72.159:8443/healthz ...
	I0408 12:52:47.527074  433674 api_server.go:279] https://192.168.72.159:8443/healthz returned 200:
	ok
	I0408 12:52:47.528230  433674 api_server.go:141] control plane version: v1.29.3
	I0408 12:52:47.528285  433674 api_server.go:131] duration metric: took 10.815426ms to wait for apiserver health ...
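The same health probe can be reproduced with the cluster's own credentials; kubectl get --raw hits the endpoint minikube polls here:

    kubectl --context embed-certs-488947 get --raw='/healthz'
    # expected output: ok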
	I0408 12:52:47.528296  433674 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:52:47.692054  433674 system_pods.go:59] 9 kube-system pods found
	I0408 12:52:47.692091  433674 system_pods.go:61] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:47.692096  433674 system_pods.go:61] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:47.692101  433674 system_pods.go:61] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:47.692105  433674 system_pods.go:61] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:47.692109  433674 system_pods.go:61] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:47.692112  433674 system_pods.go:61] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:47.692116  433674 system_pods.go:61] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:47.692123  433674 system_pods.go:61] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:47.692129  433674 system_pods.go:61] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:47.692137  433674 system_pods.go:74] duration metric: took 163.833038ms to wait for pod list to return data ...
	I0408 12:52:47.692146  433674 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:52:47.886668  433674 default_sa.go:45] found service account: "default"
	I0408 12:52:47.886695  433674 default_sa.go:55] duration metric: took 194.543392ms for default service account to be created ...
	I0408 12:52:47.886707  433674 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:52:48.090174  433674 system_pods.go:86] 9 kube-system pods found
	I0408 12:52:48.090212  433674 system_pods.go:89] "coredns-76f75df574-4gdp4" [a6d8a54f-673e-495d-a0f7-fb03ff7b447b] Running
	I0408 12:52:48.090217  433674 system_pods.go:89] "coredns-76f75df574-r5rxq" [d8b96604-1b62-462c-94b9-91d009b7f20e] Running
	I0408 12:52:48.090222  433674 system_pods.go:89] "etcd-embed-certs-488947" [b1248113-e7ed-413c-8ba8-7800f3b6d26a] Running
	I0408 12:52:48.090226  433674 system_pods.go:89] "kube-apiserver-embed-certs-488947" [dc20695f-6d9f-482d-95e0-fb2e939bc0b9] Running
	I0408 12:52:48.090232  433674 system_pods.go:89] "kube-controller-manager-embed-certs-488947" [3df50fcc-894f-4030-9cbc-66dfc680dad0] Running
	I0408 12:52:48.090236  433674 system_pods.go:89] "kube-proxy-mqrtp" [1035043f-eea0-4b45-a2df-18d477a54ae9] Running
	I0408 12:52:48.090240  433674 system_pods.go:89] "kube-scheduler-embed-certs-488947" [b4ab2e0c-6df4-490d-b332-b748e2809c64] Running
	I0408 12:52:48.090248  433674 system_pods.go:89] "metrics-server-57f55c9bc5-87ddx" [9e6f83bf-7954-4003-b66a-e62d52985947] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:52:48.090253  433674 system_pods.go:89] "storage-provisioner" [3ae5294d-2336-46b7-b2e8-25d6664d2c62] Running
	I0408 12:52:48.090260  433674 system_pods.go:126] duration metric: took 203.547421ms to wait for k8s-apps to be running ...
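Only metrics-server-57f55c9bc5-87ddx is not Running; the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see above), so the image pull presumably fails and the pod is likely to stay Pending. To inspect it by hand (pod name taken from the listing above):

    kubectl --context embed-certs-488947 -n kube-system describe pod metrics-server-57f55c9bc5-87ddx
    kubectl --context embed-certs-488947 -n kube-system get events \
      --field-selector involvedObject.name=metrics-server-57f55c9bc5-87ddx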
	I0408 12:52:48.090266  433674 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:52:48.090312  433674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:48.106285  433674 system_svc.go:56] duration metric: took 15.998172ms WaitForService to wait for kubelet
	I0408 12:52:48.106322  433674 kubeadm.go:576] duration metric: took 3.303579521s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:52:48.106345  433674 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:52:48.287351  433674 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:52:48.287381  433674 node_conditions.go:123] node cpu capacity is 2
	I0408 12:52:48.287392  433674 node_conditions.go:105] duration metric: took 181.042972ms to run NodePressure ...
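The same capacity figures can be read straight from the node object, for example:

    kubectl --context embed-certs-488947 get node embed-certs-488947 \
      -o jsonpath='{.status.capacity}{"\n"}'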
	I0408 12:52:48.287403  433674 start.go:240] waiting for startup goroutines ...
	I0408 12:52:48.287410  433674 start.go:245] waiting for cluster config update ...
	I0408 12:52:48.287419  433674 start.go:254] writing updated cluster config ...
	I0408 12:52:48.287738  433674 ssh_runner.go:195] Run: rm -f paused
	I0408 12:52:48.341532  433674 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:52:48.343890  433674 out.go:177] * Done! kubectl is now configured to use "embed-certs-488947" cluster and "default" namespace by default
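At this point the embed-certs-488947 profile is fully up; a quick sanity check from the host (no assumptions beyond the context name minikube just wrote) would be:

    kubectl config current-context          # embed-certs-488947
    kubectl --context embed-certs-488947 get nodes -o wide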
	I0408 12:52:57.475303  433439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.552015668s)
	I0408 12:52:57.475390  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:52:57.492800  433439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 12:52:57.507211  433439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:52:57.520174  433439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:52:57.520203  433439 kubeadm.go:156] found existing configuration files:
	
	I0408 12:52:57.520267  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0408 12:52:57.531854  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:52:57.531939  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:52:57.543764  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0408 12:52:57.555407  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:52:57.555479  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:52:57.569452  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.580478  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:52:57.580575  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:52:57.591819  433439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0408 12:52:57.602496  433439 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:52:57.602589  433439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:52:57.613811  433439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:52:57.669998  433439 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0408 12:52:57.670137  433439 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:52:57.830674  433439 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:52:57.830802  433439 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:52:57.830882  433439 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:52:58.090382  433439 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:52:58.092626  433439 out.go:204]   - Generating certificates and keys ...
	I0408 12:52:58.092733  433439 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:52:58.092809  433439 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:52:58.092906  433439 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:52:58.093027  433439 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:52:58.093130  433439 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:52:58.093202  433439 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:52:58.093547  433439 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:52:58.093941  433439 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:52:58.094342  433439 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:52:58.094708  433439 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:52:58.095077  433439 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:52:58.095159  433439 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:52:58.328890  433439 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:52:58.516475  433439 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 12:52:58.830765  433439 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:52:59.052737  433439 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:52:59.306668  433439 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:52:59.307305  433439 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:52:59.312102  433439 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:52:59.314983  433439 out.go:204]   - Booting up control plane ...
	I0408 12:52:59.315104  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:52:59.315191  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:52:59.315305  433439 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:52:59.334624  433439 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:52:59.335637  433439 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:52:59.335713  433439 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:52:59.486408  433439 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:05.490227  433439 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002996 seconds
	I0408 12:53:05.526221  433439 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 12:53:05.553758  433439 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 12:53:06.101116  433439 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 12:53:06.101340  433439 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-527454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 12:53:06.616939  433439 kubeadm.go:309] [bootstrap-token] Using token: oe56hb.uz3a0dd96vnry1w3
	I0408 12:53:06.618840  433439 out.go:204]   - Configuring RBAC rules ...
	I0408 12:53:06.619038  433439 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 12:53:06.625364  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 12:53:06.638696  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 12:53:06.643811  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 12:53:06.647895  433439 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 12:53:06.651857  433439 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 12:53:06.677056  433439 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 12:53:06.939588  433439 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0408 12:53:07.038633  433439 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0408 12:53:07.041464  433439 kubeadm.go:309] 
	I0408 12:53:07.041565  433439 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0408 12:53:07.041578  433439 kubeadm.go:309] 
	I0408 12:53:07.041680  433439 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0408 12:53:07.041699  433439 kubeadm.go:309] 
	I0408 12:53:07.041723  433439 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0408 12:53:07.041824  433439 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 12:53:07.041906  433439 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 12:53:07.041917  433439 kubeadm.go:309] 
	I0408 12:53:07.041988  433439 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0408 12:53:07.041998  433439 kubeadm.go:309] 
	I0408 12:53:07.042103  433439 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 12:53:07.042123  433439 kubeadm.go:309] 
	I0408 12:53:07.042168  433439 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0408 12:53:07.042253  433439 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 12:53:07.042351  433439 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 12:53:07.042361  433439 kubeadm.go:309] 
	I0408 12:53:07.042588  433439 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 12:53:07.042708  433439 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0408 12:53:07.042719  433439 kubeadm.go:309] 
	I0408 12:53:07.042823  433439 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.042959  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 \
	I0408 12:53:07.042994  433439 kubeadm.go:309] 	--control-plane 
	I0408 12:53:07.043003  433439 kubeadm.go:309] 
	I0408 12:53:07.043127  433439 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0408 12:53:07.043143  433439 kubeadm.go:309] 
	I0408 12:53:07.043253  433439 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token oe56hb.uz3a0dd96vnry1w3 \
	I0408 12:53:07.043400  433439 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:08b990fc5e6d52766664f711b308ea0c1b4f5d370fc4772dd127655cde505ed5 
	I0408 12:53:07.043583  433439 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
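Should the join command ever need to be regenerated or the CA hash re-derived on this control plane, the standard kubeadm recipes apply; this is a sketch run inside the guest, with the CA path assumed from the certificateDir "/var/lib/minikube/certs" logged above:

    sudo kubeadm token create --print-join-command
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'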
	I0408 12:53:07.043608  433439 cni.go:84] Creating CNI manager for ""
	I0408 12:53:07.043620  433439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 12:53:07.045283  433439 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0408 12:53:07.046614  433439 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0408 12:53:07.074907  433439 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
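To see exactly what was written (496 bytes per the line above), the conflist can be dumped from the guest; profile name and path are taken from this log:

    minikube -p default-k8s-diff-port-527454 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"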
	I0408 12:53:07.107168  433439 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 12:53:07.107232  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.107256  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-527454 minikube.k8s.io/updated_at=2024_04_08T12_53_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=79360015bf1010bbd536c214414dd9fff4749517 minikube.k8s.io/name=default-k8s-diff-port-527454 minikube.k8s.io/primary=true
	I0408 12:53:07.208551  433439 ops.go:34] apiserver oom_adj: -16
	I0408 12:53:07.395206  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:07.896090  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.396097  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:08.896240  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.395654  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:09.895751  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.396242  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:10.896204  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.395766  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:11.895555  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.396014  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:12.896092  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.395507  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:13.895832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.395237  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:14.895333  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.396191  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:15.895561  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.395832  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:16.895785  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.395460  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:17.895320  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.395826  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:18.896002  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.396326  433439 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 12:53:19.514796  433439 kubeadm.go:1107] duration metric: took 12.407623504s to wait for elevateKubeSystemPrivileges
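What this polling loop waited for is simply the default ServiceAccount in the default namespace; the host-side check (context name assumed to follow the profile, as minikube configures it) is:

    kubectl --context default-k8s-diff-port-527454 -n default get serviceaccount default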
	W0408 12:53:19.514843  433439 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0408 12:53:19.514856  433439 kubeadm.go:393] duration metric: took 5m9.134867072s to StartCluster
	I0408 12:53:19.514882  433439 settings.go:142] acquiring lock: {Name:mk1f0c4072886a27f5ae224d97e4f8a6bc950eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.514981  433439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:53:19.516708  433439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18588-368424/kubeconfig: {Name:mk051e10641e02a40917b504128ef8f7dac3d481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 12:53:19.516988  433439 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0408 12:53:19.518597  433439 out.go:177] * Verifying Kubernetes components...
	I0408 12:53:19.517057  433439 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0408 12:53:19.517238  433439 config.go:182] Loaded profile config "default-k8s-diff-port-527454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:53:19.519990  433439 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520011  433439 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:19.520003  433439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0408 12:53:19.520052  433439 addons.go:243] addon metrics-server should already be in state true
	I0408 12:53:19.520095  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.519995  433439 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-527454"
	I0408 12:53:19.520161  433439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-527454"
	I0408 12:53:19.520043  433439 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.520247  433439 addons.go:243] addon storage-provisioner should already be in state true
	I0408 12:53:19.520274  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.520519  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520521  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520555  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520616  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.520639  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.520556  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.536637  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I0408 12:53:19.536896  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I0408 12:53:19.536997  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0408 12:53:19.537194  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537369  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537453  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.537748  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537772  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.537883  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.537895  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538210  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538262  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538352  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.538372  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538815  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.538818  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.538791  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.538875  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.539030  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.542211  433439 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-527454"
	W0408 12:53:19.542228  433439 addons.go:243] addon default-storageclass should already be in state true
	I0408 12:53:19.542252  433439 host.go:66] Checking if "default-k8s-diff-port-527454" exists ...
	I0408 12:53:19.542841  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.542871  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.556920  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36437
	I0408 12:53:19.557552  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0408 12:53:19.557712  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.557930  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.558468  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.558482  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.559174  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.559474  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.559852  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.559881  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.560358  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.561323  433439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:53:19.561357  433439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:53:19.561606  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.563808  433439 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0408 12:53:19.565205  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0408 12:53:19.565224  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0408 12:53:19.565252  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.565914  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I0408 12:53:19.566710  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.567503  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.567521  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.568270  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.568656  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.568664  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.569109  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.569136  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.569294  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.569451  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.569707  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.569894  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.570455  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.572243  433439 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 12:53:19.573764  433439 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:19.573784  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 12:53:19.573804  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.576844  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577310  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.577380  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.577547  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.577851  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.578009  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.578154  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.579402  433439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36423
	I0408 12:53:19.579860  433439 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:53:19.580428  433439 main.go:141] libmachine: Using API Version  1
	I0408 12:53:19.580448  433439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:53:19.581001  433439 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:53:19.581202  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetState
	I0408 12:53:19.582638  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .DriverName
	I0408 12:53:19.582913  433439 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:19.582929  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 12:53:19.582949  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHHostname
	I0408 12:53:19.585995  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586456  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ff:4b", ip: ""} in network mk-default-k8s-diff-port-527454: {Iface:virbr2 ExpiryTime:2024-04-08 13:47:52 +0000 UTC Type:0 Mac:52:54:00:43:ff:4b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:default-k8s-diff-port-527454 Clientid:01:52:54:00:43:ff:4b}
	I0408 12:53:19.586488  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | domain default-k8s-diff-port-527454 has defined IP address 192.168.50.7 and MAC address 52:54:00:43:ff:4b in network mk-default-k8s-diff-port-527454
	I0408 12:53:19.586665  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHPort
	I0408 12:53:19.586845  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHKeyPath
	I0408 12:53:19.586974  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .GetSSHUsername
	I0408 12:53:19.587077  433439 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/default-k8s-diff-port-527454/id_rsa Username:docker}
	I0408 12:53:19.782606  433439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 12:53:19.822413  433439 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833467  433439 node_ready.go:49] node "default-k8s-diff-port-527454" has status "Ready":"True"
	I0408 12:53:19.833493  433439 node_ready.go:38] duration metric: took 11.040127ms for node "default-k8s-diff-port-527454" to be "Ready" ...
	I0408 12:53:19.833503  433439 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:19.845052  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:19.990826  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 12:53:20.027800  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0408 12:53:20.027827  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0408 12:53:20.066661  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 12:53:20.168240  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0408 12:53:20.168271  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0408 12:53:20.327307  433439 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.327336  433439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0408 12:53:20.390128  433439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0408 12:53:20.455235  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455265  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455575  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455607  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.455618  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.455628  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.455912  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.455929  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.494751  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:20.494778  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:20.495103  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:20.495126  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:20.495132  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.454862  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.388156991s)
	I0408 12:53:21.454942  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.454956  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455313  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.455368  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455377  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455386  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.455395  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.455729  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.455753  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.455797  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.591677  433439 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.201496165s)
	I0408 12:53:21.591745  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.591760  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592133  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) DBG | Closing plugin on server side
	I0408 12:53:21.592145  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592183  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592199  433439 main.go:141] libmachine: Making call to close driver server
	I0408 12:53:21.592214  433439 main.go:141] libmachine: (default-k8s-diff-port-527454) Calling .Close
	I0408 12:53:21.592484  433439 main.go:141] libmachine: Successfully made call to close driver server
	I0408 12:53:21.592501  433439 main.go:141] libmachine: Making call to close connection to plugin binary
	I0408 12:53:21.592513  433439 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-527454"
	I0408 12:53:21.594462  433439 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0408 12:53:21.595731  433439 addons.go:505] duration metric: took 2.078676652s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
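Once the addons are reported as enabled, the metrics-server addon installed above can be checked directly. This is a minimal sketch, not part of the test run: the context name is assumed from minikube's convention of naming the kubeconfig context after the profile, and the deployment name is inferred from the metrics-server-deployment.yaml applied above and the metrics-server-57f55c9bc5-* pod seen later in this log.

    # confirm the Deployment created by the metrics-server addon is progressing
    kubectl --context default-k8s-diff-port-527454 -n kube-system get deployment metrics-server
    # once its pod is Ready, resource metrics become available
    kubectl --context default-k8s-diff-port-527454 top nodes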
	I0408 12:53:21.852741  433439 pod_ready.go:102] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"False"
	I0408 12:53:22.375241  433439 pod_ready.go:92] pod "coredns-76f75df574-7v2jc" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.375283  433439 pod_ready.go:81] duration metric: took 2.53020032s for pod "coredns-76f75df574-7v2jc" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.375298  433439 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.391968  433439 pod_ready.go:92] pod "coredns-76f75df574-z56lf" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.392003  433439 pod_ready.go:81] duration metric: took 16.695581ms for pod "coredns-76f75df574-z56lf" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.392018  433439 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398659  433439 pod_ready.go:92] pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.398688  433439 pod_ready.go:81] duration metric: took 6.657546ms for pod "etcd-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.398699  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407214  433439 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.407241  433439 pod_ready.go:81] duration metric: took 8.535246ms for pod "kube-apiserver-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.407252  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416605  433439 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.416632  433439 pod_ready.go:81] duration metric: took 9.374648ms for pod "kube-controller-manager-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.416644  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750191  433439 pod_ready.go:92] pod "kube-proxy-tlhff" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:22.750220  433439 pod_ready.go:81] duration metric: took 333.570363ms for pod "kube-proxy-tlhff" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:22.750231  433439 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.148980  433439 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace has status "Ready":"True"
	I0408 12:53:23.149009  433439 pod_ready.go:81] duration metric: took 398.771226ms for pod "kube-scheduler-default-k8s-diff-port-527454" in "kube-system" namespace to be "Ready" ...
	I0408 12:53:23.149018  433439 pod_ready.go:38] duration metric: took 3.315505787s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 12:53:23.149034  433439 api_server.go:52] waiting for apiserver process to appear ...
	I0408 12:53:23.149087  433439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:53:23.165120  433439 api_server.go:72] duration metric: took 3.648094543s to wait for apiserver process to appear ...
	I0408 12:53:23.165149  433439 api_server.go:88] waiting for apiserver healthz status ...
	I0408 12:53:23.165170  433439 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8444/healthz ...
	I0408 12:53:23.171016  433439 api_server.go:279] https://192.168.50.7:8444/healthz returned 200:
	ok
	I0408 12:53:23.172486  433439 api_server.go:141] control plane version: v1.29.3
	I0408 12:53:23.172510  433439 api_server.go:131] duration metric: took 7.354957ms to wait for apiserver health ...
	I0408 12:53:23.172518  433439 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 12:53:23.353807  433439 system_pods.go:59] 9 kube-system pods found
	I0408 12:53:23.353846  433439 system_pods.go:61] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.353853  433439 system_pods.go:61] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.353859  433439 system_pods.go:61] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.353866  433439 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.353874  433439 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.353879  433439 system_pods.go:61] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.353883  433439 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.353890  433439 system_pods.go:61] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.353896  433439 system_pods.go:61] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.353911  433439 system_pods.go:74] duration metric: took 181.386053ms to wait for pod list to return data ...
	I0408 12:53:23.353923  433439 default_sa.go:34] waiting for default service account to be created ...
	I0408 12:53:23.549663  433439 default_sa.go:45] found service account: "default"
	I0408 12:53:23.549702  433439 default_sa.go:55] duration metric: took 195.766529ms for default service account to be created ...
	I0408 12:53:23.549717  433439 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 12:53:23.755668  433439 system_pods.go:86] 9 kube-system pods found
	I0408 12:53:23.755729  433439 system_pods.go:89] "coredns-76f75df574-7v2jc" [0ff09cb3-5ab5-4c6c-96cb-473f3473b06d] Running
	I0408 12:53:23.755739  433439 system_pods.go:89] "coredns-76f75df574-z56lf" [132d7297-7ba4-4f7f-bef8-66c67b4ef8f2] Running
	I0408 12:53:23.755748  433439 system_pods.go:89] "etcd-default-k8s-diff-port-527454" [88ac42d8-29f0-4d26-8b73-3dc399b0e9f4] Running
	I0408 12:53:23.755755  433439 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-527454" [d335bc8f-66e5-48b1-8c67-7c665920f561] Running
	I0408 12:53:23.755761  433439 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-527454" [d41c6100-891f-4715-8d1f-502a8b52320d] Running
	I0408 12:53:23.755768  433439 system_pods.go:89] "kube-proxy-tlhff" [6365e5a8-345d-4e77-988c-1dcab7b21065] Running
	I0408 12:53:23.755774  433439 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-527454" [dadb3dbb-9e2b-44bf-83aa-eebe4663e6d4] Running
	I0408 12:53:23.755787  433439 system_pods.go:89] "metrics-server-57f55c9bc5-jqbmw" [f2c5e235-6807-4248-81ff-a5e49c8a753b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0408 12:53:23.755792  433439 system_pods.go:89] "storage-provisioner" [040c7a58-258b-4798-8fae-7dc42ce50cac] Running
	I0408 12:53:23.755805  433439 system_pods.go:126] duration metric: took 206.081481ms to wait for k8s-apps to be running ...
	I0408 12:53:23.755814  433439 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 12:53:23.755866  433439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:23.774910  433439 system_svc.go:56] duration metric: took 19.080727ms WaitForService to wait for kubelet
	I0408 12:53:23.774954  433439 kubeadm.go:576] duration metric: took 4.257931558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 12:53:23.774985  433439 node_conditions.go:102] verifying NodePressure condition ...
	I0408 12:53:23.949588  433439 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 12:53:23.949618  433439 node_conditions.go:123] node cpu capacity is 2
	I0408 12:53:23.949630  433439 node_conditions.go:105] duration metric: took 174.638826ms to run NodePressure ...
	I0408 12:53:23.949642  433439 start.go:240] waiting for startup goroutines ...
	I0408 12:53:23.949649  433439 start.go:245] waiting for cluster config update ...
	I0408 12:53:23.949659  433439 start.go:254] writing updated cluster config ...
	I0408 12:53:23.949929  433439 ssh_runner.go:195] Run: rm -f paused
	I0408 12:53:24.004633  433439 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0408 12:53:24.007640  433439 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-527454" cluster and "default" namespace by default
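The readiness sequence above ends with a healthz probe against the apiserver (api_server.go:253, https://192.168.50.7:8444/healthz answering 200 "ok"). The same probe can be repeated by hand once the profile's kubeconfig is in place; a minimal sketch, assuming the context minikube configured for this profile is still current:

    # query the apiserver health endpoint through the kubeconfig minikube wrote
    kubectl --context default-k8s-diff-port-527454 get --raw /healthz
    # a healthy control plane answers with:
    # ok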
	I0408 12:53:50.506496  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:53:50.506736  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:53:50.508871  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:50.508975  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:50.509090  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:50.509248  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:50.509435  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:50.509546  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:50.511505  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:50.511616  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:50.511727  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:50.511838  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:50.511925  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:50.512024  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:50.512112  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:50.512228  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:50.512332  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:50.512442  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:50.512551  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:50.512608  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:50.512661  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:50.512714  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:50.512784  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:50.512866  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:50.512934  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:50.513078  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:50.513228  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:50.513285  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:50.513383  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:50.515207  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:50.515297  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:50.515380  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:50.515449  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:50.515522  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:50.515668  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:53:50.515756  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:53:50.515843  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516036  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516118  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516346  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516428  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516675  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.516747  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.516990  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517092  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:53:50.517336  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:53:50.517352  433881 kubeadm.go:309] 
	I0408 12:53:50.517402  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:53:50.517453  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:53:50.517463  433881 kubeadm.go:309] 
	I0408 12:53:50.517517  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:53:50.517572  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:53:50.517743  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:53:50.517757  433881 kubeadm.go:309] 
	I0408 12:53:50.517898  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:53:50.517949  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:53:50.517999  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:53:50.518014  433881 kubeadm.go:309] 
	I0408 12:53:50.518163  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:53:50.518286  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:53:50.518297  433881 kubeadm.go:309] 
	I0408 12:53:50.518448  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:53:50.518581  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:53:50.518686  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:53:50.518747  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:53:50.518781  433881 kubeadm.go:309] 
	W0408 12:53:50.518884  433881 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0408 12:53:50.518933  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0408 12:53:50.995302  433881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:53:51.011982  433881 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 12:53:51.022491  433881 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 12:53:51.022512  433881 kubeadm.go:156] found existing configuration files:
	
	I0408 12:53:51.022565  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 12:53:51.032994  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 12:53:51.033071  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 12:53:51.043529  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 12:53:51.053500  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 12:53:51.053580  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 12:53:51.063658  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.073397  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 12:53:51.073464  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 12:53:51.085243  433881 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 12:53:51.095094  433881 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 12:53:51.095165  433881 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 12:53:51.105549  433881 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 12:53:51.185596  433881 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0408 12:53:51.185706  433881 kubeadm.go:309] [preflight] Running pre-flight checks
	I0408 12:53:51.349502  433881 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 12:53:51.349661  433881 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 12:53:51.349805  433881 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0408 12:53:51.557584  433881 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 12:53:51.559567  433881 out.go:204]   - Generating certificates and keys ...
	I0408 12:53:51.559701  433881 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0408 12:53:51.559800  433881 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0408 12:53:51.559968  433881 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0408 12:53:51.560065  433881 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0408 12:53:51.560159  433881 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0408 12:53:51.560241  433881 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0408 12:53:51.560337  433881 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0408 12:53:51.560443  433881 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0408 12:53:51.560561  433881 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0408 12:53:51.560680  433881 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0408 12:53:51.560735  433881 kubeadm.go:309] [certs] Using the existing "sa" key
	I0408 12:53:51.560826  433881 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 12:53:51.727630  433881 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 12:53:51.895665  433881 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 12:53:52.087304  433881 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 12:53:52.187789  433881 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 12:53:52.213627  433881 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 12:53:52.213777  433881 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 12:53:52.213837  433881 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0408 12:53:52.384599  433881 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 12:53:52.386843  433881 out.go:204]   - Booting up control plane ...
	I0408 12:53:52.386992  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 12:53:52.389989  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 12:53:52.393527  433881 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 12:53:52.394471  433881 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 12:53:52.405071  433881 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0408 12:54:32.408240  433881 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0408 12:54:32.408440  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:32.408738  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:37.409255  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:37.409493  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:54:47.409946  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:54:47.410234  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:07.410503  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:07.410710  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.409536  433881 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0408 12:55:47.410032  433881 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0408 12:55:47.410062  433881 kubeadm.go:309] 
	I0408 12:55:47.410118  433881 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0408 12:55:47.410216  433881 kubeadm.go:309] 		timed out waiting for the condition
	I0408 12:55:47.410232  433881 kubeadm.go:309] 
	I0408 12:55:47.410278  433881 kubeadm.go:309] 	This error is likely caused by:
	I0408 12:55:47.410341  433881 kubeadm.go:309] 		- The kubelet is not running
	I0408 12:55:47.410503  433881 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0408 12:55:47.410515  433881 kubeadm.go:309] 
	I0408 12:55:47.410691  433881 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0408 12:55:47.410768  433881 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0408 12:55:47.410833  433881 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0408 12:55:47.410843  433881 kubeadm.go:309] 
	I0408 12:55:47.411002  433881 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0408 12:55:47.411092  433881 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0408 12:55:47.411099  433881 kubeadm.go:309] 
	I0408 12:55:47.411208  433881 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0408 12:55:47.411325  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0408 12:55:47.411415  433881 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0408 12:55:47.411523  433881 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0408 12:55:47.411534  433881 kubeadm.go:309] 
	I0408 12:55:47.413655  433881 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 12:55:47.413779  433881 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0408 12:55:47.413887  433881 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0408 12:55:47.414099  433881 kubeadm.go:393] duration metric: took 7m58.347147979s to StartCluster
	I0408 12:55:47.414206  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0408 12:55:47.414540  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0408 12:55:47.466864  433881 cri.go:89] found id: ""
	I0408 12:55:47.466899  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.466909  433881 logs.go:278] No container was found matching "kube-apiserver"
	I0408 12:55:47.466917  433881 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0408 12:55:47.466999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0408 12:55:47.505562  433881 cri.go:89] found id: ""
	I0408 12:55:47.505590  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.505599  433881 logs.go:278] No container was found matching "etcd"
	I0408 12:55:47.505606  433881 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0408 12:55:47.505663  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0408 12:55:47.545030  433881 cri.go:89] found id: ""
	I0408 12:55:47.545063  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.545075  433881 logs.go:278] No container was found matching "coredns"
	I0408 12:55:47.545086  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0408 12:55:47.545158  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0408 12:55:47.584650  433881 cri.go:89] found id: ""
	I0408 12:55:47.584685  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.584698  433881 logs.go:278] No container was found matching "kube-scheduler"
	I0408 12:55:47.584707  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0408 12:55:47.584775  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0408 12:55:47.624857  433881 cri.go:89] found id: ""
	I0408 12:55:47.624885  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.624893  433881 logs.go:278] No container was found matching "kube-proxy"
	I0408 12:55:47.624900  433881 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0408 12:55:47.624953  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0408 12:55:47.662872  433881 cri.go:89] found id: ""
	I0408 12:55:47.662910  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.662922  433881 logs.go:278] No container was found matching "kube-controller-manager"
	I0408 12:55:47.662931  433881 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0408 12:55:47.662999  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0408 12:55:47.702086  433881 cri.go:89] found id: ""
	I0408 12:55:47.702132  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.702142  433881 logs.go:278] No container was found matching "kindnet"
	I0408 12:55:47.702148  433881 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0408 12:55:47.702198  433881 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0408 12:55:47.754880  433881 cri.go:89] found id: ""
	I0408 12:55:47.754912  433881 logs.go:276] 0 containers: []
	W0408 12:55:47.754922  433881 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0408 12:55:47.754932  433881 logs.go:123] Gathering logs for describe nodes ...
	I0408 12:55:47.754946  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0408 12:55:47.839768  433881 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0408 12:55:47.839800  433881 logs.go:123] Gathering logs for CRI-O ...
	I0408 12:55:47.839817  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0408 12:55:47.947231  433881 logs.go:123] Gathering logs for container status ...
	I0408 12:55:47.947281  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0408 12:55:47.997692  433881 logs.go:123] Gathering logs for kubelet ...
	I0408 12:55:47.997725  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0408 12:55:48.050804  433881 logs.go:123] Gathering logs for dmesg ...
	I0408 12:55:48.050854  433881 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0408 12:55:48.067168  433881 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0408 12:55:48.067218  433881 out.go:239] * 
	W0408 12:55:48.067277  433881 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.067305  433881 out.go:239] * 
	W0408 12:55:48.068281  433881 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 12:55:48.072609  433881 out.go:177] 
	W0408 12:55:48.074039  433881 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0408 12:55:48.074112  433881 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0408 12:55:48.074174  433881 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0408 12:55:48.076570  433881 out.go:177] 
	
	
	==> CRI-O <==
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.593995791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581602593973131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=177e1cdf-3d08-4b47-9fcc-3247581f5d07 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.594810342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60c320d4-060c-4e99-b594-55243d468065 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.594867327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60c320d4-060c-4e99-b594-55243d468065 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.594904631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60c320d4-060c-4e99-b594-55243d468065 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.632160022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d26d97c-3df9-4501-836b-46bc4bfbb7e1 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.632332154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d26d97c-3df9-4501-836b-46bc4bfbb7e1 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.633981934Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ab4e3d9-bac6-4d86-bfc3-fb90d25ab822 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.634482095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581602634450337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ab4e3d9-bac6-4d86-bfc3-fb90d25ab822 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.635291846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ab82a29-c1f6-44d9-be33-27bb43f55888 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.635365953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ab82a29-c1f6-44d9-be33-27bb43f55888 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.635415480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2ab82a29-c1f6-44d9-be33-27bb43f55888 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.670432322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab9d8ede-2f46-416a-aea8-567c98203446 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.670625766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab9d8ede-2f46-416a-aea8-567c98203446 name=/runtime.v1.RuntimeService/Version
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.672484391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30fcc0ce-afb6-44f9-a1e4-afab5e86a6fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.673014074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581602672969467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30fcc0ce-afb6-44f9-a1e4-afab5e86a6fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.673849928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c532a72-ff68-4fdd-9eaf-510f3d297ef3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.673929895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c532a72-ff68-4fdd-9eaf-510f3d297ef3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.673970173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2c532a72-ff68-4fdd-9eaf-510f3d297ef3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.707036901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14c3fa1a-ac0c-4900-9174-b7e68c1241ff name=/runtime.v1.RuntimeService/Version
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.707140721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14c3fa1a-ac0c-4900-9174-b7e68c1241ff name=/runtime.v1.RuntimeService/Version
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.711018426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4aec7b6c-1363-4668-846f-9cf94280dfc9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.711501616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712581602711466555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aec7b6c-1363-4668-846f-9cf94280dfc9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.712315685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff9a2ecd-0f8c-4a4b-9e5d-6c9b81516508 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.712367728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff9a2ecd-0f8c-4a4b-9e5d-6c9b81516508 name=/runtime.v1.RuntimeService/ListContainers
	Apr 08 13:06:42 old-k8s-version-384148 crio[654]: time="2024-04-08 13:06:42.712406979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ff9a2ecd-0f8c-4a4b-9e5d-6c9b81516508 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 12:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056085] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.848078] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.261221] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.697800] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.947628] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.074394] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060792] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.180910] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.184499] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.345839] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +7.450294] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.068325] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.296156] systemd-fstab-generator[965]: Ignoring "noauto" option for root device
	[Apr 8 12:48] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 8 12:51] systemd-fstab-generator[4953]: Ignoring "noauto" option for root device
	[Apr 8 12:53] systemd-fstab-generator[5230]: Ignoring "noauto" option for root device
	[  +0.074857] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:06:42 up 19 min,  0 users,  load average: 0.00, 0.01, 0.04
	Linux old-k8s-version-384148 5.10.207 #1 SMP Wed Apr 3 13:16:09 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: net.(*Dialer).DialContext(0xc0001f22a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c4a420, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c22a60, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c4a420, 0x24, 0x60, 0x7f18d47320e8, 0x118, ...)
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: net/http.(*Transport).dial(0xc000684000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000c4a420, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: net/http.(*Transport).dialConn(0xc000684000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000358540, 0x5, 0xc000c4a420, 0x24, 0x0, 0xc0002c06c0, ...)
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: net/http.(*Transport).dialConnFor(0xc000684000, 0xc0005bfce0)
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: created by net/http.(*Transport).queueForDial
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: goroutine 171 [select]:
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0001c3a40, 0xc0006ef780, 0xc000c335c0, 0xc000c33560)
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]: created by net.(*netFD).connect
	Apr 08 13:06:39 old-k8s-version-384148 kubelet[6658]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 08 13:06:40 old-k8s-version-384148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Apr 08 13:06:40 old-k8s-version-384148 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 08 13:06:40 old-k8s-version-384148 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 08 13:06:40 old-k8s-version-384148 kubelet[6667]: I0408 13:06:40.699033    6667 server.go:416] Version: v1.20.0
	Apr 08 13:06:40 old-k8s-version-384148 kubelet[6667]: I0408 13:06:40.699352    6667 server.go:837] Client rotation is on, will bootstrap in background
	Apr 08 13:06:40 old-k8s-version-384148 kubelet[6667]: I0408 13:06:40.703708    6667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 08 13:06:40 old-k8s-version-384148 kubelet[6667]: W0408 13:06:40.707501    6667 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 08 13:06:40 old-k8s-version-384148 kubelet[6667]: I0408 13:06:40.707803    6667 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 2 (259.692429ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-384148" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (108.93s)
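For context on the kubelet failure above: the minikube output ends with a suggestion to inspect 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal manual follow-up, assuming the same profile (old-k8s-version-384148), driver (kvm2), runtime (crio), and Kubernetes version (v1.20.0) used in this run, could look like the commands below; this is a sketch of the suggested next step, not the harness's actual retry logic, and the harness's full flag set is not reproduced here.

	# Inspect the kubelet and any crashed control-plane containers on the node
	# (reachable via: out/minikube-linux-amd64 ssh -p old-k8s-version-384148)
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry the start with the cgroup-driver override suggested in the output above
	out/minikube-linux-amd64 start -p old-k8s-version-384148 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd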

                                                
                                    

Test pass (257/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.29.3/json-events 12.69
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.19
18 TestDownloadOnly/v1.29.3/DeleteAll 0.15
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.30.0-rc.0/json-events 13.64
22 TestDownloadOnly/v1.30.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-rc.0/DeleteAll 0.15
28 TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.58
31 TestOffline 104.51
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 151.1
38 TestAddons/parallel/Registry 16.77
40 TestAddons/parallel/InspektorGadget 11.11
41 TestAddons/parallel/MetricsServer 6.83
42 TestAddons/parallel/HelmTiller 12.82
44 TestAddons/parallel/CSI 87.14
45 TestAddons/parallel/Headlamp 14.3
46 TestAddons/parallel/CloudSpanner 6.83
48 TestAddons/parallel/NvidiaDevicePlugin 6.98
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
54 TestCertOptions 48.46
55 TestCertExpiration 592.31
57 TestForceSystemdFlag 49.12
58 TestForceSystemdEnv 48.64
60 TestKVMDriverInstallOrUpdate 4.45
64 TestErrorSpam/setup 44.27
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.79
67 TestErrorSpam/pause 1.68
68 TestErrorSpam/unpause 1.75
69 TestErrorSpam/stop 5.5
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 59.11
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.94
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
81 TestFunctional/serial/CacheCmd/cache/add_local 2.22
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 38.41
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.61
92 TestFunctional/serial/LogsFileCmd 1.64
93 TestFunctional/serial/InvalidService 5.65
95 TestFunctional/parallel/ConfigCmd 0.4
96 TestFunctional/parallel/DashboardCmd 19.01
97 TestFunctional/parallel/DryRun 0.36
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.25
103 TestFunctional/parallel/ServiceCmdConnect 11.71
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 52.07
107 TestFunctional/parallel/SSHCmd 0.43
108 TestFunctional/parallel/CpCmd 1.86
109 TestFunctional/parallel/MySQL 36.56
110 TestFunctional/parallel/FileSync 0.3
111 TestFunctional/parallel/CertSync 1.46
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
119 TestFunctional/parallel/License 0.42
120 TestFunctional/parallel/Version/short 0.08
121 TestFunctional/parallel/Version/components 1.1
131 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
133 TestFunctional/parallel/ProfileCmd/profile_list 0.36
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
135 TestFunctional/parallel/MountCmd/any-port 9.75
136 TestFunctional/parallel/ServiceCmd/List 0.37
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
138 TestFunctional/parallel/MountCmd/specific-port 2.13
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
140 TestFunctional/parallel/ServiceCmd/Format 0.45
141 TestFunctional/parallel/ServiceCmd/URL 0.38
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
147 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
148 TestFunctional/parallel/ImageCommands/Setup 1.96
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.32
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.96
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.69
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.95
156 TestFunctional/parallel/ImageCommands/ImageRemove 1.26
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 9.41
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.48
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 235.91
166 TestMultiControlPlane/serial/DeployApp 6.77
167 TestMultiControlPlane/serial/PingHostFromPods 1.5
168 TestMultiControlPlane/serial/AddWorkerNode 49.11
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
171 TestMultiControlPlane/serial/CopyFile 14.46
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.54
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.42
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.42
180 TestMultiControlPlane/serial/RestartCluster 365.58
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
182 TestMultiControlPlane/serial/AddSecondaryNode 78.18
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.58
187 TestJSONOutput/start/Command 95.78
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.8
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.68
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.42
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.24
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 89.7
219 TestMountStart/serial/StartWithMountFirst 28.38
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 30.33
222 TestMountStart/serial/VerifyMountSecond 0.41
223 TestMountStart/serial/DeleteFirst 1.12
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 1.38
226 TestMountStart/serial/RestartStopped 22.57
227 TestMountStart/serial/VerifyMountPostStop 0.42
230 TestMultiNode/serial/FreshStart2Nodes 101.06
231 TestMultiNode/serial/DeployApp2Nodes 5.43
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 44.03
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 8.03
237 TestMultiNode/serial/StopNode 2.52
238 TestMultiNode/serial/StartAfterStop 30.59
240 TestMultiNode/serial/DeleteNode 2.44
242 TestMultiNode/serial/RestartMultiNode 169.52
243 TestMultiNode/serial/ValidateNameConflict 46.91
250 TestScheduledStopUnix 116.19
254 TestRunningBinaryUpgrade 243.16
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 98.94
261 TestNoKubernetes/serial/StartWithStopK8s 29.39
269 TestNetworkPlugins/group/false 3.8
273 TestNoKubernetes/serial/Start 44.59
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
275 TestNoKubernetes/serial/ProfileList 1.79
276 TestNoKubernetes/serial/Stop 1.6
277 TestNoKubernetes/serial/StartNoArgs 42.6
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
287 TestPause/serial/Start 99.83
288 TestStoppedBinaryUpgrade/Setup 2.31
289 TestStoppedBinaryUpgrade/Upgrade 106.2
290 TestPause/serial/SecondStartNoReconfiguration 50.71
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
292 TestNetworkPlugins/group/auto/Start 101.77
293 TestPause/serial/Pause 0.94
294 TestPause/serial/VerifyStatus 0.29
295 TestNetworkPlugins/group/kindnet/Start 74.34
296 TestPause/serial/Unpause 0.67
297 TestPause/serial/PauseAgain 0.88
298 TestPause/serial/DeletePaused 1.03
299 TestPause/serial/VerifyDeletedResources 0.25
300 TestNetworkPlugins/group/calico/Start 122.96
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
304 TestNetworkPlugins/group/auto/KubeletFlags 0.25
305 TestNetworkPlugins/group/auto/NetCatPod 11.26
306 TestNetworkPlugins/group/kindnet/DNS 0.17
307 TestNetworkPlugins/group/kindnet/Localhost 0.15
308 TestNetworkPlugins/group/kindnet/HairPin 0.16
309 TestNetworkPlugins/group/auto/DNS 0.17
310 TestNetworkPlugins/group/auto/Localhost 0.15
311 TestNetworkPlugins/group/auto/HairPin 0.15
312 TestNetworkPlugins/group/custom-flannel/Start 91.06
313 TestNetworkPlugins/group/bridge/Start 129.72
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.22
316 TestNetworkPlugins/group/calico/NetCatPod 11.23
317 TestNetworkPlugins/group/calico/DNS 0.22
318 TestNetworkPlugins/group/calico/Localhost 0.19
319 TestNetworkPlugins/group/calico/HairPin 0.2
320 TestNetworkPlugins/group/flannel/Start 93.89
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.36
323 TestNetworkPlugins/group/custom-flannel/DNS 0.2
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
326 TestNetworkPlugins/group/enable-default-cni/Start 100.97
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
328 TestNetworkPlugins/group/bridge/NetCatPod 10.28
329 TestNetworkPlugins/group/bridge/DNS 0.2
330 TestNetworkPlugins/group/bridge/Localhost 0.16
331 TestNetworkPlugins/group/bridge/HairPin 0.14
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
334 TestNetworkPlugins/group/flannel/NetCatPod 14.34
337 TestNetworkPlugins/group/flannel/DNS 0.21
338 TestNetworkPlugins/group/flannel/Localhost 0.2
339 TestNetworkPlugins/group/flannel/HairPin 0.2
341 TestStartStop/group/no-preload/serial/FirstStart 141.13
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
345 TestStartStop/group/embed-certs/serial/FirstStart 101.05
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.67
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
352 TestStartStop/group/no-preload/serial/DeployApp 9.29
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
355 TestStartStop/group/embed-certs/serial/DeployApp 10.3
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 682.63
366 TestStartStop/group/no-preload/serial/SecondStart 588.2
367 TestStartStop/group/embed-certs/serial/SecondStart 633.18
368 TestStartStop/group/old-k8s-version/serial/Stop 4.31
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
380 TestStartStop/group/newest-cni/serial/FirstStart 57.47
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
383 TestStartStop/group/newest-cni/serial/Stop 7.49
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
385 TestStartStop/group/newest-cni/serial/SecondStart 39.05
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/newest-cni/serial/Pause 2.55
x
+
TestDownloadOnly/v1.20.0/json-events (22.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-750624 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-750624 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.569497827s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-750624
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-750624: exit status 85 (80.507303ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-750624 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC |          |
	|         | -p download-only-750624        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:20:22
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:20:22.319358  375829 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:20:22.319512  375829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:20:22.319528  375829 out.go:304] Setting ErrFile to fd 2...
	I0408 11:20:22.319535  375829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:20:22.319757  375829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	W0408 11:20:22.319892  375829 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18588-368424/.minikube/config/config.json: open /home/jenkins/minikube-integration/18588-368424/.minikube/config/config.json: no such file or directory
	I0408 11:20:22.320474  375829 out.go:298] Setting JSON to true
	I0408 11:20:22.321519  375829 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3766,"bootTime":1712571457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:20:22.321596  375829 start.go:139] virtualization: kvm guest
	I0408 11:20:22.324183  375829 out.go:97] [download-only-750624] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	W0408 11:20:22.324318  375829 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball: no such file or directory
	I0408 11:20:22.324397  375829 notify.go:220] Checking for updates...
	I0408 11:20:22.325807  375829 out.go:169] MINIKUBE_LOCATION=18588
	I0408 11:20:22.327514  375829 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:20:22.329068  375829 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:20:22.330355  375829 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:20:22.331656  375829 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 11:20:22.334380  375829 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 11:20:22.334671  375829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:20:22.368311  375829 out.go:97] Using the kvm2 driver based on user configuration
	I0408 11:20:22.368343  375829 start.go:297] selected driver: kvm2
	I0408 11:20:22.368351  375829 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:20:22.368709  375829 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:20:22.368808  375829 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:20:22.384941  375829 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:20:22.385012  375829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:20:22.385530  375829 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 11:20:22.385689  375829 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 11:20:22.385769  375829 cni.go:84] Creating CNI manager for ""
	I0408 11:20:22.385783  375829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 11:20:22.385791  375829 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:20:22.385844  375829 start.go:340] cluster config:
	{Name:download-only-750624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-750624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:20:22.386036  375829 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:20:22.388147  375829 out.go:97] Downloading VM boot image ...
	I0408 11:20:22.388204  375829 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/iso/amd64/minikube-v1.33.0-1712138767-18566-amd64.iso
	I0408 11:20:31.329615  375829 out.go:97] Starting "download-only-750624" primary control-plane node in "download-only-750624" cluster
	I0408 11:20:31.329684  375829 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 11:20:31.426078  375829 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0408 11:20:31.426114  375829 cache.go:56] Caching tarball of preloaded images
	I0408 11:20:31.426312  375829 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0408 11:20:31.428817  375829 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 11:20:31.428851  375829 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0408 11:20:31.530941  375829 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-750624 host does not exist
	  To start a cluster, run: "minikube start -p download-only-750624"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-750624
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (12.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-879549 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-879549 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.684762077s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (12.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/LogsDuration (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-879549
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-879549: exit status 85 (193.943995ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-750624 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC |                     |
	|         | -p download-only-750624        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC | 08 Apr 24 11:20 UTC |
	| delete  | -p download-only-750624        | download-only-750624 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC | 08 Apr 24 11:20 UTC |
	| start   | -o=json --download-only        | download-only-879549 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC |                     |
	|         | -p download-only-879549        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:20:45
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:20:45.257116  376027 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:20:45.257267  376027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:20:45.257281  376027 out.go:304] Setting ErrFile to fd 2...
	I0408 11:20:45.257287  376027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:20:45.257500  376027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:20:45.258130  376027 out.go:298] Setting JSON to true
	I0408 11:20:45.259106  376027 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3789,"bootTime":1712571457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:20:45.259177  376027 start.go:139] virtualization: kvm guest
	I0408 11:20:45.261774  376027 out.go:97] [download-only-879549] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:20:45.263600  376027 out.go:169] MINIKUBE_LOCATION=18588
	I0408 11:20:45.261979  376027 notify.go:220] Checking for updates...
	I0408 11:20:45.266929  376027 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:20:45.268685  376027 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:20:45.270169  376027 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:20:45.271646  376027 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 11:20:45.274338  376027 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 11:20:45.274585  376027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:20:45.307139  376027 out.go:97] Using the kvm2 driver based on user configuration
	I0408 11:20:45.307184  376027 start.go:297] selected driver: kvm2
	I0408 11:20:45.307190  376027 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:20:45.307571  376027 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:20:45.307710  376027 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:20:45.323421  376027 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:20:45.323540  376027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:20:45.324105  376027 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 11:20:45.324262  376027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 11:20:45.324335  376027 cni.go:84] Creating CNI manager for ""
	I0408 11:20:45.324353  376027 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 11:20:45.324367  376027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:20:45.324429  376027 start.go:340] cluster config:
	{Name:download-only-879549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-879549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:20:45.324541  376027 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:20:45.326491  376027 out.go:97] Starting "download-only-879549" primary control-plane node in "download-only-879549" cluster
	I0408 11:20:45.326523  376027 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:20:45.424637  376027 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0408 11:20:45.424678  376027 cache.go:56] Caching tarball of preloaded images
	I0408 11:20:45.424877  376027 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0408 11:20:45.427051  376027 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0408 11:20:45.427089  376027 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 ...
	I0408 11:20:45.521067  376027 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f4e94cb6232b24c3932ab20b1ee6dad -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-879549 host does not exist
	  To start a cluster, run: "minikube start -p download-only-879549"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-879549
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/json-events (13.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-531329 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-531329 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.642667359s)
--- PASS: TestDownloadOnly/v1.30.0-rc.0/json-events (13.64s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-531329
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-531329: exit status 85 (79.452893ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-750624 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC |                     |
	|         | -p download-only-750624           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC | 08 Apr 24 11:20 UTC |
	| delete  | -p download-only-750624           | download-only-750624 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC | 08 Apr 24 11:20 UTC |
	| start   | -o=json --download-only           | download-only-879549 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC |                     |
	|         | -p download-only-879549           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC | 08 Apr 24 11:20 UTC |
	| delete  | -p download-only-879549           | download-only-879549 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC | 08 Apr 24 11:20 UTC |
	| start   | -o=json --download-only           | download-only-531329 | jenkins | v1.33.0-beta.0 | 08 Apr 24 11:20 UTC |                     |
	|         | -p download-only-531329           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/08 11:20:58
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 11:20:58.419003  376227 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:20:58.419135  376227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:20:58.419150  376227 out.go:304] Setting ErrFile to fd 2...
	I0408 11:20:58.419158  376227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:20:58.419360  376227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:20:58.420043  376227 out.go:298] Setting JSON to true
	I0408 11:20:58.421138  376227 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3802,"bootTime":1712571457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:20:58.421226  376227 start.go:139] virtualization: kvm guest
	I0408 11:20:58.423581  376227 out.go:97] [download-only-531329] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:20:58.425552  376227 out.go:169] MINIKUBE_LOCATION=18588
	I0408 11:20:58.423775  376227 notify.go:220] Checking for updates...
	I0408 11:20:58.429052  376227 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:20:58.430632  376227 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:20:58.432306  376227 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:20:58.433790  376227 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0408 11:20:58.436427  376227 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 11:20:58.436684  376227 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:20:58.470123  376227 out.go:97] Using the kvm2 driver based on user configuration
	I0408 11:20:58.470169  376227 start.go:297] selected driver: kvm2
	I0408 11:20:58.470177  376227 start.go:901] validating driver "kvm2" against <nil>
	I0408 11:20:58.470654  376227 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:20:58.470782  376227 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18588-368424/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0408 11:20:58.486509  376227 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0408 11:20:58.486598  376227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0408 11:20:58.487332  376227 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0408 11:20:58.487522  376227 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 11:20:58.487609  376227 cni.go:84] Creating CNI manager for ""
	I0408 11:20:58.487626  376227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0408 11:20:58.487638  376227 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 11:20:58.487730  376227 start.go:340] cluster config:
	{Name:download-only-531329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:download-only-531329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:20:58.487865  376227 iso.go:125] acquiring lock: {Name:mk905101cd088ffce857d9d63a12e311f1c90fe6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 11:20:58.489800  376227 out.go:97] Starting "download-only-531329" primary control-plane node in "download-only-531329" cluster
	I0408 11:20:58.489829  376227 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 11:20:58.587974  376227 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0408 11:20:58.588024  376227 cache.go:56] Caching tarball of preloaded images
	I0408 11:20:58.588229  376227 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0408 11:20:58.590283  376227 out.go:97] Downloading Kubernetes v1.30.0-rc.0 preload ...
	I0408 11:20:58.590319  376227 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0408 11:20:59.104198  376227 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8b7303f27cbc36bf6c5aef5b8609bfb -> /home/jenkins/minikube-integration/18588-368424/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-531329 host does not exist
	  To start a cluster, run: "minikube start -p download-only-531329"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.08s)
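Note: the preload download steps above fetch the tarball with an md5 checksum appended to the URL query string. A minimal sketch of repeating that integrity check by hand with curl and md5sum; the URL and checksum are copied verbatim from the log above, while the /tmp output path is a hypothetical choice:
    # Download the v1.30.0-rc.0 preload tarball outside of minikube (target path is hypothetical).
    curl -fL -o /tmp/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.0/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4"
    # Verify it against the md5 the log shows being passed via ?checksum=md5:...
    echo "a8b7303f27cbc36bf6c5aef5b8609bfb  /tmp/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -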

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-531329
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-068480 --alsologtostderr --binary-mirror http://127.0.0.1:42539 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-068480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-068480
--- PASS: TestBinaryMirror (0.58s)
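Note: TestBinaryMirror only exercises the --binary-mirror flag against a throwaway HTTP endpoint on 127.0.0.1:42539 that the harness brings up for the test. A rough, hypothetical way to try the same flag by hand is to serve a local directory over HTTP and point a download-only start at it; this is a sketch, not the harness's actual mirror implementation, and the profile name is invented:
    # Serve the current directory on the port shown in the log (hypothetical local mirror).
    python3 -m http.server 42539 &
    MIRROR_PID=$!
    # Point minikube's binary downloads at the local mirror.
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:42539 --driver=kvm2 --container-runtime=crio
    kill "$MIRROR_PID"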

                                                
                                    
TestOffline (104.51s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-085390 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-085390 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.473715154s)
helpers_test.go:175: Cleaning up "offline-crio-085390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-085390
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-085390: (1.039411452s)
--- PASS: TestOffline (104.51s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-825010
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-825010: exit status 85 (66.911503ms)

                                                
                                                
-- stdout --
	* Profile "addons-825010" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-825010"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-825010
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-825010: exit status 85 (68.079742ms)

                                                
                                                
-- stdout --
	* Profile "addons-825010" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-825010"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (151.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-825010 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-825010 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.103276516s)
--- PASS: TestAddons/Setup (151.10s)
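Note: the setup above enables every addon at start time via repeated --addons flags. The same addons can also be toggled on the running profile with the addons subcommand, which later tests in this report use for headlamp and dashboard; a short sketch against the profile created here (the metrics-server example is illustrative):
    # Show addon status for the profile created by TestAddons/Setup.
    minikube addons list -p addons-825010
    # Enable or disable an individual addon after the cluster is already running.
    minikube addons enable metrics-server -p addons-825010
    minikube addons disable metrics-server -p addons-825010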

                                                
                                    
TestAddons/parallel/Registry (16.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 25.467769ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qw4cl" [9f724ae4-733d-40e1-a150-764921001381] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019716361s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6sfpx" [7b943a40-c7a9-411f-911f-fb652b42547e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005352729s
addons_test.go:340: (dbg) Run:  kubectl --context addons-825010 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-825010 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-825010 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.830075484s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 ip
2024/04/08 11:24:00 [DEBUG] GET http://192.168.39.221:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.77s)
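Note: the registry check above boils down to two probes: resolving the in-cluster Service DNS name from a throwaway busybox pod, and hitting the registry over the node IP. A condensed sketch of the same probes, with commands copied from the log (the node IP 192.168.39.221 is specific to this run):
    # In-cluster probe: busybox pod hits the registry Service by DNS name.
    kubectl --context addons-825010 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side probe: the node IP reported by 'minikube ip', port 5000 (value from this run).
    curl -sI http://192.168.39.221:5000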

                                                
                                    
TestAddons/parallel/InspektorGadget (11.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-59d99" [8d91aaa7-7fc6-40cc-9cc6-bec7dac0a81a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004932307s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-825010
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-825010: (6.106101186s)
--- PASS: TestAddons/parallel/InspektorGadget (11.11s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.308886ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-zgtxw" [f4f27621-21f2-454c-82af-2b867ffac4e3] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005850207s
addons_test.go:415: (dbg) Run:  kubectl --context addons-825010 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.82s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.455693ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-2d2hj" [b3fa7f26-5133-4ed3-a287-d04e374d1484] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005042114s
addons_test.go:473: (dbg) Run:  kubectl --context addons-825010 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-825010 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.924639117s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.82s)

                                                
                                    
TestAddons/parallel/CSI (87.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.617291ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-825010 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-825010 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9a61f0da-43a3-4cd4-aab7-295e224c3b43] Pending
helpers_test.go:344: "task-pv-pod" [9a61f0da-43a3-4cd4-aab7-295e224c3b43] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9a61f0da-43a3-4cd4-aab7-295e224c3b43] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.005590754s
addons_test.go:584: (dbg) Run:  kubectl --context addons-825010 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-825010 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-825010 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-825010 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-825010 delete pod task-pv-pod: (1.352092256s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-825010 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-825010 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825010 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-825010 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7efc63a2-d902-4f53-961b-b836de05847d] Pending
helpers_test.go:344: "task-pv-pod-restore" [7efc63a2-d902-4f53-961b-b836de05847d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7efc63a2-d902-4f53-961b-b836de05847d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00424542s
addons_test.go:626: (dbg) Run:  kubectl --context addons-825010 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-825010 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-825010 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-825010 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.920334242s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-825010 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (87.14s)
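Note: the long run of identical kubectl calls above is the helper polling the PVC phase until the claim is usable. A minimal bash equivalent of that wait, using the same jsonpath queries that appear in the log; treating "Bound" as the target condition and the 2s interval are assumptions, not something the log states:
    # Repeatedly read the PVC phase, as helpers_test.go:394 does above (condition simplified).
    while [ "$(kubectl --context addons-825010 get pvc hpvc -n default -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2
    done
    # Same idea for the volume snapshot readiness check (helpers_test.go:419).
    kubectl --context addons-825010 get volumesnapshot new-snapshot-demo -n default \
      -o jsonpath='{.status.readyToUse}'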

                                                
                                    
TestAddons/parallel/Headlamp (14.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-825010 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-825010 --alsologtostderr -v=1: (1.295541275s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-qvbrm" [554a155a-0a39-4a68-ae77-ee5f6e56c84d] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-qvbrm" [554a155a-0a39-4a68-ae77-ee5f6e56c84d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-qvbrm" [554a155a-0a39-4a68-ae77-ee5f6e56c84d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004626175s
--- PASS: TestAddons/parallel/Headlamp (14.30s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-j82sg" [cb85638b-7147-4e1f-a874-3ba3498506e0] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004893757s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-825010
--- PASS: TestAddons/parallel/CloudSpanner (6.83s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.98s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bh7lk" [112b1946-35f2-4c3c-ac13-d15c612bc3e9] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007043131s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-825010
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.98s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-grh7b" [b6e7f495-597b-4ef6-8f69-6ae1669946e1] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006933035s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-825010 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-825010 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestCertOptions (48.46s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-064378 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-064378 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.948367991s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-064378 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-064378 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-064378 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-064378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-064378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-064378: (1.031386001s)
--- PASS: TestCertOptions (48.46s)
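Note: TestCertOptions passes extra SANs and a non-default API server port, then inspects the generated certificate and kubeconfig. A compact way to repeat that inspection by hand, reusing the commands the test runs above; the grep patterns are illustrative and only valid while the profile still exists:
    # Dump the apiserver certificate from inside the guest and look for the requested SANs.
    minikube -p cert-options-064378 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -E '192\.168\.15\.15|www\.google\.com'
    # Confirm the admin kubeconfig points at the non-default apiserver port 8555.
    minikube ssh -p cert-options-064378 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555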

                                                
                                    
TestCertExpiration (592.31s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-283523 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-283523 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (45.678572236s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-283523 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-283523 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (6m5.433306396s)
helpers_test.go:175: Cleaning up "cert-expiration-283523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-283523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-283523: (1.195709958s)
--- PASS: TestCertExpiration (592.31s)
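Note: TestCertExpiration first starts the profile with --cert-expiration=3m, waits for those certificates to lapse, and then restarts with an 8760h expiry to force regeneration. A hedged sketch of checking the remaining validity yourself while the profile is up; it assumes the apiserver cert path shown elsewhere in this report and that openssl is available in the guest (as the cert-options test above demonstrates):
    # Print the certificate's notAfter date from inside the guest.
    minikube -p cert-expiration-283523 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
    # Exit non-zero if the certificate expires within the next 60 seconds.
    minikube -p cert-expiration-283523 ssh \
      "openssl x509 -noout -checkend 60 -in /var/lib/minikube/certs/apiserver.crt"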

                                                
                                    
TestForceSystemdFlag (49.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-103801 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-103801 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.907105981s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-103801 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-103801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-103801
--- PASS: TestForceSystemdFlag (49.12s)
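Note: the force-systemd check reads the generated CRI-O drop-in to confirm the cgroup manager was switched. A one-line sketch of the same verification; the log only shows the file being dumped, so grepping for a cgroup setting (rather than a specific key name) is an assumption about CRI-O's config format:
    # Print the drop-in the test cats above and pick out the cgroup manager setting.
    minikube -p force-systemd-flag-103801 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
      | grep -i cgroup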

                                                
                                    
TestForceSystemdEnv (48.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-495725 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-495725 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.594292653s)
helpers_test.go:175: Cleaning up "force-systemd-env-495725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-495725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-495725: (1.048396159s)
--- PASS: TestForceSystemdEnv (48.64s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.45s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.45s)

                                                
                                    
TestErrorSpam/setup (44.27s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-637441 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-637441 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-637441 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-637441 --driver=kvm2  --container-runtime=crio: (44.270743434s)
--- PASS: TestErrorSpam/setup (44.27s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
TestErrorSpam/pause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 pause
--- PASS: TestErrorSpam/pause (1.68s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (5.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 stop: (2.303142378s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 stop: (1.864997419s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-637441 --log_dir /tmp/nospam-637441 stop: (1.335031906s)
--- PASS: TestErrorSpam/stop (5.50s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18588-368424/.minikube/files/etc/test/nested/copy/375817/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (59.11s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567858 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-567858 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (59.112859871s)
--- PASS: TestFunctional/serial/StartWithProxy (59.11s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.94s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567858 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-567858 --alsologtostderr -v=8: (36.935637571s)
functional_test.go:659: soft start took 36.936499863s for "functional-567858" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.94s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-567858 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 cache add registry.k8s.io/pause:3.1: (1.136019281s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 cache add registry.k8s.io/pause:3.3: (1.251191651s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 cache add registry.k8s.io/pause:latest: (1.141997594s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-567858 /tmp/TestFunctionalserialCacheCmdcacheadd_local508320213/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cache add minikube-local-cache-test:functional-567858
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 cache add minikube-local-cache-test:functional-567858: (1.792638435s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cache delete minikube-local-cache-test:functional-567858
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-567858
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (231.946079ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
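
For reference, the cache_reload sequence above removes registry.k8s.io/pause:latest inside the node, confirms `crictl inspecti` then fails, runs `minikube cache reload`, and confirms the image is restored. A minimal Go sketch of that sequence, assuming a `minikube` binary on PATH and the profile name from this run (illustrative only, not the test code itself):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and echoes the combined output, mirroring the
// "(dbg) Run:" lines in the log above.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	const profile = "functional-567858" // profile name taken from this report

	// Remove the cached image inside the node, then confirm it is gone.
	_ = run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// Reload the on-host cache into the node and verify the image is back.
	_ = run("-p", profile, "cache", "reload")
	if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}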

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 kubectl -- --context functional-567858 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-567858 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (38.41s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-567858 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.41100784s)
functional_test.go:757: restart took 38.411168766s for "functional-567858" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.41s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-567858 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
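
ComponentHealth above asks kubectl for the control-plane pods (label tier=control-plane in kube-system) and checks each pod's phase and Ready condition. A rough Go sketch of the same check, decoding the kubectl JSON by hand; the field names follow the core/v1 Pod schema, and the context name is the one from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList captures only the fields the health check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-567858",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		// Control-plane static pods carry a "component" label (e.g. kube-apiserver).
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}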

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 logs: (1.606501628s)
--- PASS: TestFunctional/serial/LogsCmd (1.61s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 logs --file /tmp/TestFunctionalserialLogsFileCmd267520451/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 logs --file /tmp/TestFunctionalserialLogsFileCmd267520451/001/logs.txt: (1.633985747s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.64s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.65s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-567858 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-567858
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-567858: exit status 115 (302.032885ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.170:30671 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-567858 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-567858 delete -f testdata/invalidsvc.yaml: (2.142302176s)
--- PASS: TestFunctional/serial/InvalidService (5.65s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 config get cpus: exit status 14 (65.742151ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 config get cpus: exit status 14 (57.643967ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
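
The ConfigCmd cycle above relies on `config get cpus` exiting with status 14 when the key is absent, and 0 once `config set cpus 2` has run. A small Go sketch of the same set/get/unset cycle, assuming `minikube` on PATH; the exit code 14 is taken from the log output of this run rather than from documentation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGet returns the printed value and the process exit code for `config get cpus`.
func configGet(profile string) (string, int) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return "", exitErr.ExitCode()
	}
	return string(out), 0
}

func main() {
	const profile = "functional-567858"

	if _, code := configGet(profile); code == 14 {
		fmt.Println("cpus is unset, as expected before `config set`")
	}
	_ = exec.Command("minikube", "-p", profile, "config", "set", "cpus", "2").Run()
	if val, code := configGet(profile); code == 0 {
		fmt.Printf("cpus is now %q\n", val)
	}
	_ = exec.Command("minikube", "-p", profile, "config", "unset", "cpus").Run()
	if _, code := configGet(profile); code == 14 {
		fmt.Println("cpus is unset again")
	}
}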

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (19.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-567858 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-567858 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 384543: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.01s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567858 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-567858 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (171.163202ms)

                                                
                                                
-- stdout --
	* [functional-567858] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:33:19.708337  384034 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:33:19.708505  384034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:33:19.708517  384034 out.go:304] Setting ErrFile to fd 2...
	I0408 11:33:19.708523  384034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:33:19.708738  384034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:33:19.709462  384034 out.go:298] Setting JSON to false
	I0408 11:33:19.710936  384034 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4543,"bootTime":1712571457,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:33:19.711034  384034 start.go:139] virtualization: kvm guest
	I0408 11:33:19.713279  384034 out.go:177] * [functional-567858] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 11:33:19.715092  384034 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:33:19.716419  384034 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:33:19.715147  384034 notify.go:220] Checking for updates...
	I0408 11:33:19.717810  384034 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:33:19.719267  384034 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:33:19.720649  384034 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:33:19.723867  384034 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:33:19.725866  384034 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:33:19.726483  384034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:33:19.726550  384034 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:33:19.743213  384034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0408 11:33:19.743683  384034 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:33:19.744393  384034 main.go:141] libmachine: Using API Version  1
	I0408 11:33:19.744416  384034 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:33:19.744936  384034 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:33:19.745174  384034 main.go:141] libmachine: (functional-567858) Calling .DriverName
	I0408 11:33:19.745525  384034 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:33:19.745990  384034 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:33:19.746043  384034 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:33:19.762429  384034 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0408 11:33:19.763003  384034 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:33:19.763574  384034 main.go:141] libmachine: Using API Version  1
	I0408 11:33:19.763597  384034 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:33:19.763949  384034 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:33:19.764153  384034 main.go:141] libmachine: (functional-567858) Calling .DriverName
	I0408 11:33:19.801608  384034 out.go:177] * Using the kvm2 driver based on existing profile
	I0408 11:33:19.802790  384034 start.go:297] selected driver: kvm2
	I0408 11:33:19.802808  384034 start.go:901] validating driver "kvm2" against &{Name:functional-567858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-567858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:33:19.802977  384034 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:33:19.805400  384034 out.go:177] 
	W0408 11:33:19.806559  384034 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0408 11:33:19.807724  384034 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567858 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
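
The dry-run above fails with RSRC_INSUFFICIENT_REQ_MEMORY because 250MB is below the 1800MB floor quoted in the error message. A tiny sketch of that pre-flight check; treating 1800MB as a constant here is an assumption about this minikube build only, taken from the log text:

package main

import "fmt"

const usableMinimumMB = 1800 // from "usable minimum of 1800MB" in the log above

func validateMemory(requestedMB int) error {
	if requestedMB < usableMinimumMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, usableMinimumMB)
	}
	return nil
}

func main() {
	for _, req := range []int{250, 4000} {
		if err := validateMemory(req); err != nil {
			fmt.Println("reject:", err) // the 250MB case, mirroring exit status 23 above
		} else {
			fmt.Printf("accept: %dMB is enough\n", req)
		}
	}
}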

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567858 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-567858 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (164.28434ms)

                                                
                                                
-- stdout --
	* [functional-567858] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 11:33:19.535382  384005 out.go:291] Setting OutFile to fd 1 ...
	I0408 11:33:19.535500  384005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:33:19.535511  384005 out.go:304] Setting ErrFile to fd 2...
	I0408 11:33:19.535516  384005 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 11:33:19.535878  384005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 11:33:19.536445  384005 out.go:298] Setting JSON to false
	I0408 11:33:19.537508  384005 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4543,"bootTime":1712571457,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 11:33:19.537576  384005 start.go:139] virtualization: kvm guest
	I0408 11:33:19.540006  384005 out.go:177] * [functional-567858] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0408 11:33:19.541554  384005 notify.go:220] Checking for updates...
	I0408 11:33:19.541564  384005 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 11:33:19.542905  384005 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 11:33:19.544345  384005 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 11:33:19.545835  384005 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 11:33:19.547280  384005 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 11:33:19.548760  384005 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 11:33:19.550957  384005 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 11:33:19.551401  384005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:33:19.551478  384005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:33:19.570085  384005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0408 11:33:19.570512  384005 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:33:19.571219  384005 main.go:141] libmachine: Using API Version  1
	I0408 11:33:19.571244  384005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:33:19.571586  384005 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:33:19.571842  384005 main.go:141] libmachine: (functional-567858) Calling .DriverName
	I0408 11:33:19.572094  384005 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 11:33:19.572411  384005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 11:33:19.572458  384005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 11:33:19.587765  384005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0408 11:33:19.588434  384005 main.go:141] libmachine: () Calling .GetVersion
	I0408 11:33:19.590259  384005 main.go:141] libmachine: Using API Version  1
	I0408 11:33:19.590293  384005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 11:33:19.590695  384005 main.go:141] libmachine: () Calling .GetMachineName
	I0408 11:33:19.590988  384005 main.go:141] libmachine: (functional-567858) Calling .DriverName
	I0408 11:33:19.629177  384005 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0408 11:33:19.630495  384005 start.go:297] selected driver: kvm2
	I0408 11:33:19.630521  384005 start.go:901] validating driver "kvm2" against &{Name:functional-567858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18566/minikube-v1.33.0-1712138767-18566-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1712138838-18566@sha256:a1b6bbc384c0914baa698cc91ccedcb662b3c0986082ff16cc623c5d83216034 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-567858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 11:33:19.630739  384005 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 11:33:19.633409  384005 out.go:177] 
	W0408 11:33:19.634499  384005 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0408 11:33:19.635751  384005 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
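
The second status invocation above passes a Go template as the output format (the "kublet" spelling is reproduced verbatim from the test's format string). A small sketch of how such a template renders against a status struct using text/template; the Status type here is a stand-in defined for illustration, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the fields the format string above references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The same format string the test passes to `minikube status -f`.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"

	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}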

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-567858 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-567858 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-2d8jj" [eedbad9b-4bd9-49cc-a964-c07f7a2f4aae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-2d8jj" [eedbad9b-4bd9-49cc-a964-c07f7a2f4aae] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005396059s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.170:31276
functional_test.go:1671: http://192.168.39.170:31276: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-2d8jj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.170:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.170:31276
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.71s)
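
ServiceCmdConnect above creates a deployment, exposes it as a NodePort service, asks `minikube service --url` for the reachable endpoint, and GETs it. A compressed Go sketch of that flow, assuming kubectl and minikube on PATH, the context/profile from this run, and the echoserver image named in the log; unlike the test, this sketch does not wait for the pod to become Running first:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "functional-567858"
	exec.Command("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8").Run()
	exec.Command("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080").Run()

	// Ask minikube for the NodePort URL, then fetch it.
	out, err := exec.Command("minikube", "-p", ctx, "service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s\n", url, resp.StatusCode, body)
}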

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (52.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [164242ea-efa6-4ad7-80b1-45c00602e119] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004908908s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-567858 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-567858 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-567858 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-567858 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6132b39f-fe3c-405f-be93-dca634beb277] Pending
helpers_test.go:344: "sp-pod" [6132b39f-fe3c-405f-be93-dca634beb277] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6132b39f-fe3c-405f-be93-dca634beb277] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.005023866s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-567858 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-567858 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-567858 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [44059ffe-ea62-4f8f-a002-ddfba81a8203] Pending
helpers_test.go:344: "sp-pod" [44059ffe-ea62-4f8f-a002-ddfba81a8203] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [44059ffe-ea62-4f8f-a002-ddfba81a8203] Running
E0408 11:33:54.785143  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.006261257s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-567858 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.07s)
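
The PersistentVolumeClaim test above checks that data written to the claim survives pod deletion: it touches /tmp/mount/foo in the first sp-pod, deletes the pod, recreates it from the same manifest, and lists the directory again. A compressed Go sketch of that persistence check, shelling out to kubectl with the context and manifest paths from this run (and, unlike the test, not waiting for the recreated pod to become Running):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command against the functional-567858 context.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-567858"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Write a marker file into the mounted claim, then delete the pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")

	// Recreate the pod from the same manifest.
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")

	// The marker should still exist because it lives on the PVC, not in the pod.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err == nil && strings.Contains(out, "foo") {
		fmt.Println("data persisted across pod recreation")
	} else {
		fmt.Println("marker not found:", out, err)
	}
}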

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh -n functional-567858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cp functional-567858:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3152636979/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh -n functional-567858 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh -n functional-567858 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)
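
CpCmd above copies a file into the node, copies it back out, and verifies content via `ssh sudo cat`. A small sketch of the round trip, assuming the minikube binary and profile from this run; the host-side destination path is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// minikube runs a command against the functional-567858 profile.
func minikube(args ...string) (string, error) {
	out, err := exec.Command("minikube",
		append([]string{"-p", "functional-567858"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Host -> node, then read it back through ssh to confirm the content.
	if _, err := minikube("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	inNode, _ := minikube("ssh", "-n", "functional-567858", "sudo cat /home/docker/cp-test.txt")

	// Node -> host, to an illustrative temporary location on the host side.
	if _, err := minikube("cp", "functional-567858:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt"); err != nil {
		panic(err)
	}
	fmt.Printf("content inside node:\n%s", inNode)
}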

                                                
                                    
x
+
TestFunctional/parallel/MySQL (36.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-567858 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-qgj4j" [9af04b1f-9aa8-458e-8c3c-6a0f338825cc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-qgj4j" [9af04b1f-9aa8-458e-8c3c-6a0f338825cc] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.005054199s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-567858 exec mysql-859648c796-qgj4j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-567858 exec mysql-859648c796-qgj4j -- mysql -ppassword -e "show databases;": exit status 1 (146.147004ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-567858 exec mysql-859648c796-qgj4j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-567858 exec mysql-859648c796-qgj4j -- mysql -ppassword -e "show databases;": exit status 1 (144.838406ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-567858 exec mysql-859648c796-qgj4j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.56s)
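
The MySQL test above tolerates the first two `show databases;` failures because mysqld is still starting inside the pod (ERROR 2002 means the socket is not accepting connections yet); it simply retries until the query succeeds. A minimal retry loop in the same spirit, assuming kubectl, the pod name as it appears in this run, and the test's password:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const pod = "mysql-859648c796-qgj4j" // pod name taken from this report

	// ERROR 2002 just means mysqld is not accepting connections yet, so retry.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-567858", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed, retrying: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysql never became ready")
}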

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/375817/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /etc/test/nested/copy/375817/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/375817.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /etc/ssl/certs/375817.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/375817.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /usr/share/ca-certificates/375817.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3758172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /etc/ssl/certs/3758172.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3758172.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /usr/share/ca-certificates/3758172.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-567858 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh "sudo systemctl is-active docker": exit status 1 (310.073812ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh "sudo systemctl is-active containerd": exit status 1 (325.476685ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
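
NonActiveRuntimeDisabled above confirms that, with crio as the container runtime, `systemctl is-active docker` and `systemctl is-active containerd` both print "inactive" and exit non-zero (systemd returns 3 for inactive units, which is why the ssh commands above report a non-zero status even though the result is the expected one). A small sketch that treats that outcome as the expected "disabled" answer rather than a failure; profile and unit names are the ones from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-567858"

	// With crio active, the other runtimes should report "inactive".
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "inactive" {
			// The non-zero exit is expected here; only an "active" answer would be a failure.
			fmt.Printf("%s is disabled, as expected (exit err: %v)\n", unit, err)
		} else {
			fmt.Printf("unexpected state for %s: %q\n", unit, state)
		}
	}
}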

                                                
                                    
x
+
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 version -o=json --components: (1.099948883s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-567858 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-567858 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-r8zp4" [76084400-af29-49fd-82e4-b0e4152745a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-r8zp4" [76084400-af29-49fd-82e4-b0e4152745a5] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004968679s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "287.18591ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "67.842893ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "237.00433ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "60.628641ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)
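
The ProfileCmd checks above compare how long `profile list` takes with and without `--light` (presumably faster in the light case because per-profile status probes are skipped; in this run ~60ms versus ~240ms). A tiny timing wrapper in the same style as the `Took "..." to run "..."` lines:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// timed runs a command and reports its wall-clock duration.
func timed(name string, args ...string) {
	start := time.Now()
	_ = exec.Command(name, args...).Run()
	fmt.Printf("Took %q to run %q\n", time.Since(start).String(), name+" "+strings.Join(args, " "))
}

func main() {
	timed("minikube", "profile", "list", "-o", "json")
	timed("minikube", "profile", "list", "-o", "json", "--light")
}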

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdany-port1701574551/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1712575988377933023" to /tmp/TestFunctionalparallelMountCmdany-port1701574551/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1712575988377933023" to /tmp/TestFunctionalparallelMountCmdany-port1701574551/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1712575988377933023" to /tmp/TestFunctionalparallelMountCmdany-port1701574551/001/test-1712575988377933023
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.173641ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  8 11:33 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  8 11:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  8 11:33 test-1712575988377933023
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh cat /mount-9p/test-1712575988377933023
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-567858 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1e29b599-adfb-47db-b2e9-3b8422ad992b] Pending
helpers_test.go:344: "busybox-mount" [1e29b599-adfb-47db-b2e9-3b8422ad992b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1e29b599-adfb-47db-b2e9-3b8422ad992b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1e29b599-adfb-47db-b2e9-3b8422ad992b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004643528s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-567858 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdany-port1701574551/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.75s)
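The any-port case shares a host directory into the guest over 9p on an auto-selected port; a sketch with the installed binary and an illustrative host directory (/tmp/demo-mount) in place of the test's temp path:

    mkdir -p /tmp/demo-mount
    # mount runs in the foreground; background it to keep the share alive while verifying
    minikube mount -p functional-567858 /tmp/demo-mount:/mount-9p &
    # confirm the 9p mount is visible inside the guest and inspect its contents
    minikube -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-567858 ssh -- ls -la /mount-9p
    # tear it down
    minikube -p functional-567858 ssh "sudo umount -f /mount-9p"
    kill %1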

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 service list -o json
functional_test.go:1490: Took "373.22595ms" to run "out/minikube-linux-amd64 -p functional-567858 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)
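Both listing checks above call the same subcommand; with the installed binary:

    minikube -p functional-567858 service list            # table of exposed services and their URLs
    minikube -p functional-567858 service list -o json    # same data as JSON, for scripting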

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdspecific-port3451488253/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.599501ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdspecific-port3451488253/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh "sudo umount -f /mount-9p": exit status 1 (284.747732ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-567858 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdspecific-port3451488253/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)
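The specific-port variant pins the 9p server to a fixed host port; the log also shows that once the mount process is gone, a forced umount of the target exits with status 32 ("not mounted"), which the test tolerates. A sketch with an illustrative host directory:

    mkdir -p /tmp/demo-mount
    minikube mount -p functional-567858 /tmp/demo-mount:/mount-9p --port 46464 &
    minikube -p functional-567858 ssh "findmnt -T /mount-9p | grep 9p"
    kill %1                                                        # stopping the mount process removes the share
    minikube -p functional-567858 ssh "sudo umount -f /mount-9p"   # umount exits 32 if nothing is mounted there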

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.170:30576
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.170:30576
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
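HTTPS, Format and URL all resolve the NodePort endpoint created for hello-node; with the installed binary:

    minikube -p functional-567858 service --namespace=default --https --url hello-node   # e.g. https://192.168.39.170:30576
    minikube -p functional-567858 service hello-node --url --format={{.IP}}              # node IP only
    minikube -p functional-567858 service hello-node --url                               # e.g. http://192.168.39.170:30576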

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2388439593/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2388439593/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2388439593/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T" /mount1: exit status 1 (348.20375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-567858 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2388439593/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2388439593/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567858 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2388439593/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.00s)
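VerifyCleanup mounts the same host directory at three guest paths and then relies on a single kill call to reap every mount process for the profile; a sketch with an illustrative host directory:

    mkdir -p /tmp/demo-mount
    minikube mount -p functional-567858 /tmp/demo-mount:/mount1 &
    minikube mount -p functional-567858 /tmp/demo-mount:/mount2 &
    minikube mount -p functional-567858 /tmp/demo-mount:/mount3 &
    minikube -p functional-567858 ssh "findmnt -T" /mount1
    # terminate all mount processes started for this profile in one call
    minikube mount -p functional-567858 --kill=true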

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567858 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-567858
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-567858
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567858 image ls --format short --alsologtostderr:
I0408 11:33:51.894736  385439 out.go:291] Setting OutFile to fd 1 ...
I0408 11:33:51.895007  385439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:51.895016  385439 out.go:304] Setting ErrFile to fd 2...
I0408 11:33:51.895020  385439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:51.895194  385439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
I0408 11:33:51.895785  385439 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:51.895890  385439 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:51.896265  385439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:51.896328  385439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:51.911196  385439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
I0408 11:33:51.911760  385439 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:51.912429  385439 main.go:141] libmachine: Using API Version  1
I0408 11:33:51.912459  385439 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:51.912903  385439 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:51.913149  385439 main.go:141] libmachine: (functional-567858) Calling .GetState
I0408 11:33:51.915384  385439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:51.915450  385439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:51.931102  385439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
I0408 11:33:51.931678  385439 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:51.932278  385439 main.go:141] libmachine: Using API Version  1
I0408 11:33:51.932312  385439 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:51.932703  385439 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:51.932950  385439 main.go:141] libmachine: (functional-567858) Calling .DriverName
I0408 11:33:51.933210  385439 ssh_runner.go:195] Run: systemctl --version
I0408 11:33:51.933245  385439 main.go:141] libmachine: (functional-567858) Calling .GetSSHHostname
I0408 11:33:51.936586  385439 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:51.937004  385439 main.go:141] libmachine: (functional-567858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:12:fa", ip: ""} in network mk-functional-567858: {Iface:virbr1 ExpiryTime:2024-04-08 12:30:50 +0000 UTC Type:0 Mac:52:54:00:fa:12:fa Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-567858 Clientid:01:52:54:00:fa:12:fa}
I0408 11:33:51.937044  385439 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined IP address 192.168.39.170 and MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:51.937321  385439 main.go:141] libmachine: (functional-567858) Calling .GetSSHPort
I0408 11:33:51.937516  385439 main.go:141] libmachine: (functional-567858) Calling .GetSSHKeyPath
I0408 11:33:51.937675  385439 main.go:141] libmachine: (functional-567858) Calling .GetSSHUsername
I0408 11:33:51.937824  385439 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/functional-567858/id_rsa Username:docker}
I0408 11:33:52.032269  385439 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:33:52.137048  385439 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.137061  385439 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.137377  385439 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.137403  385439 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:52.137414  385439 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.137423  385439 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.137636  385439 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.137658  385439 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
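The four ImageList checks differ only in the --format value passed to "image ls"; with the installed binary:

    minikube -p functional-567858 image ls --format short   # one image reference per line
    minikube -p functional-567858 image ls --format table   # table with tag, image ID and size
    minikube -p functional-567858 image ls --format json
    minikube -p functional-567858 image ls --format yaml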

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567858 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-567858  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-567858  | f28ca4706347b | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567858 image ls --format table --alsologtostderr:
I0408 11:33:52.464724  385551 out.go:291] Setting OutFile to fd 1 ...
I0408 11:33:52.464975  385551 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:52.464983  385551 out.go:304] Setting ErrFile to fd 2...
I0408 11:33:52.464988  385551 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:52.465181  385551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
I0408 11:33:52.465828  385551 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:52.465932  385551 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:52.466328  385551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:52.466387  385551 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:52.482302  385551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
I0408 11:33:52.482802  385551 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:52.483446  385551 main.go:141] libmachine: Using API Version  1
I0408 11:33:52.483475  385551 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:52.483937  385551 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:52.484180  385551 main.go:141] libmachine: (functional-567858) Calling .GetState
I0408 11:33:52.486231  385551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:52.486274  385551 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:52.501737  385551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
I0408 11:33:52.502228  385551 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:52.502713  385551 main.go:141] libmachine: Using API Version  1
I0408 11:33:52.502737  385551 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:52.503047  385551 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:52.503249  385551 main.go:141] libmachine: (functional-567858) Calling .DriverName
I0408 11:33:52.503520  385551 ssh_runner.go:195] Run: systemctl --version
I0408 11:33:52.503556  385551 main.go:141] libmachine: (functional-567858) Calling .GetSSHHostname
I0408 11:33:52.506419  385551 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:52.506909  385551 main.go:141] libmachine: (functional-567858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:12:fa", ip: ""} in network mk-functional-567858: {Iface:virbr1 ExpiryTime:2024-04-08 12:30:50 +0000 UTC Type:0 Mac:52:54:00:fa:12:fa Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-567858 Clientid:01:52:54:00:fa:12:fa}
I0408 11:33:52.506946  385551 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined IP address 192.168.39.170 and MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:52.507107  385551 main.go:141] libmachine: (functional-567858) Calling .GetSSHPort
I0408 11:33:52.507290  385551 main.go:141] libmachine: (functional-567858) Calling .GetSSHKeyPath
I0408 11:33:52.507470  385551 main.go:141] libmachine: (functional-567858) Calling .GetSSHUsername
I0408 11:33:52.507657  385551 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/functional-567858/id_rsa Username:docker}
I0408 11:33:52.587748  385551 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:33:52.643632  385551 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.643653  385551 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.644008  385551 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.644019  385551 main.go:141] libmachine: (functional-567858) DBG | Closing plugin on server side
I0408 11:33:52.644028  385551 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:52.644042  385551 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.644050  385551 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.644307  385551 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.644339  385551 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:52.644353  385551 main.go:141] libmachine: (functional-567858) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567858 image ls --format json --alsologtostderr:
[{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d6
5175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"f28ca4706347b73aee1caa999a90fec424326273cc573d6ead533881fdf474de","repoDigests":["localhost/minikube-local-cache-test@sha256:b90de0526a824e172eac8b98ca6b73bc69
1d9134e73bada54b59c132c86d3504"],"repoTags":["localhost/minikube-local-cache-test:functional-567858"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7
e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e0
8a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-567858"],"size":"34114467"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbc
c5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:f
a87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567858 image ls --format json --alsologtostderr:
I0408 11:33:52.207549  385492 out.go:291] Setting OutFile to fd 1 ...
I0408 11:33:52.207667  385492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:52.207672  385492 out.go:304] Setting ErrFile to fd 2...
I0408 11:33:52.207677  385492 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:52.207916  385492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
I0408 11:33:52.208512  385492 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:52.208622  385492 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:52.208992  385492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:52.209067  385492 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:52.226048  385492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
I0408 11:33:52.226591  385492 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:52.227242  385492 main.go:141] libmachine: Using API Version  1
I0408 11:33:52.227269  385492 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:52.227726  385492 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:52.227948  385492 main.go:141] libmachine: (functional-567858) Calling .GetState
I0408 11:33:52.230044  385492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:52.230091  385492 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:52.244833  385492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36727
I0408 11:33:52.245199  385492 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:52.245635  385492 main.go:141] libmachine: Using API Version  1
I0408 11:33:52.245652  385492 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:52.246124  385492 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:52.246345  385492 main.go:141] libmachine: (functional-567858) Calling .DriverName
I0408 11:33:52.246533  385492 ssh_runner.go:195] Run: systemctl --version
I0408 11:33:52.246557  385492 main.go:141] libmachine: (functional-567858) Calling .GetSSHHostname
I0408 11:33:52.249496  385492 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:52.249843  385492 main.go:141] libmachine: (functional-567858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:12:fa", ip: ""} in network mk-functional-567858: {Iface:virbr1 ExpiryTime:2024-04-08 12:30:50 +0000 UTC Type:0 Mac:52:54:00:fa:12:fa Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-567858 Clientid:01:52:54:00:fa:12:fa}
I0408 11:33:52.249864  385492 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined IP address 192.168.39.170 and MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:52.250040  385492 main.go:141] libmachine: (functional-567858) Calling .GetSSHPort
I0408 11:33:52.250214  385492 main.go:141] libmachine: (functional-567858) Calling .GetSSHKeyPath
I0408 11:33:52.250336  385492 main.go:141] libmachine: (functional-567858) Calling .GetSSHUsername
I0408 11:33:52.250463  385492 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/functional-567858/id_rsa Username:docker}
I0408 11:33:52.343550  385492 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:33:52.394623  385492 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.394652  385492 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.394936  385492 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.394956  385492 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:52.394983  385492 main.go:141] libmachine: (functional-567858) DBG | Closing plugin on server side
I0408 11:33:52.394989  385492 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.395000  385492 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.395278  385492 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.395292  385492 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567858 image ls --format yaml --alsologtostderr:
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: f28ca4706347b73aee1caa999a90fec424326273cc573d6ead533881fdf474de
repoDigests:
- localhost/minikube-local-cache-test@sha256:b90de0526a824e172eac8b98ca6b73bc691d9134e73bada54b59c132c86d3504
repoTags:
- localhost/minikube-local-cache-test:functional-567858
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-567858
size: "34114467"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567858 image ls --format yaml --alsologtostderr:
I0408 11:33:51.897537  385440 out.go:291] Setting OutFile to fd 1 ...
I0408 11:33:51.897800  385440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:51.897811  385440 out.go:304] Setting ErrFile to fd 2...
I0408 11:33:51.897818  385440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:51.898009  385440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
I0408 11:33:51.898614  385440 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:51.898738  385440 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:51.899104  385440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:51.899167  385440 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:51.914714  385440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38415
I0408 11:33:51.915293  385440 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:51.915988  385440 main.go:141] libmachine: Using API Version  1
I0408 11:33:51.916021  385440 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:51.916419  385440 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:51.916666  385440 main.go:141] libmachine: (functional-567858) Calling .GetState
I0408 11:33:51.918676  385440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:51.918745  385440 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:51.934092  385440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
I0408 11:33:51.934512  385440 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:51.935097  385440 main.go:141] libmachine: Using API Version  1
I0408 11:33:51.935125  385440 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:51.935526  385440 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:51.935802  385440 main.go:141] libmachine: (functional-567858) Calling .DriverName
I0408 11:33:51.936013  385440 ssh_runner.go:195] Run: systemctl --version
I0408 11:33:51.936035  385440 main.go:141] libmachine: (functional-567858) Calling .GetSSHHostname
I0408 11:33:51.939316  385440 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:51.939730  385440 main.go:141] libmachine: (functional-567858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:12:fa", ip: ""} in network mk-functional-567858: {Iface:virbr1 ExpiryTime:2024-04-08 12:30:50 +0000 UTC Type:0 Mac:52:54:00:fa:12:fa Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-567858 Clientid:01:52:54:00:fa:12:fa}
I0408 11:33:51.939768  385440 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined IP address 192.168.39.170 and MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:51.939889  385440 main.go:141] libmachine: (functional-567858) Calling .GetSSHPort
I0408 11:33:51.940065  385440 main.go:141] libmachine: (functional-567858) Calling .GetSSHKeyPath
I0408 11:33:51.940240  385440 main.go:141] libmachine: (functional-567858) Calling .GetSSHUsername
I0408 11:33:51.940422  385440 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/functional-567858/id_rsa Username:docker}
I0408 11:33:52.026463  385440 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 11:33:52.128191  385440 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.128207  385440 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.128561  385440 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.128584  385440 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:52.128594  385440 main.go:141] libmachine: Making call to close driver server
I0408 11:33:52.128602  385440 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:52.128901  385440 main.go:141] libmachine: (functional-567858) DBG | Closing plugin on server side
I0408 11:33:52.128988  385440 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:52.129045  385440 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567858 ssh pgrep buildkitd: exit status 1 (226.077549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image build -t localhost/my-image:functional-567858 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image build -t localhost/my-image:functional-567858 testdata/build --alsologtostderr: (3.192362923s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567858 image build -t localhost/my-image:functional-567858 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 03fb1b44961
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-567858
--> 7ad214c3df3
Successfully tagged localhost/my-image:functional-567858
7ad214c3df3b69707c3e165731324c106db122ec99930d5017c988b2ced5a525
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567858 image build -t localhost/my-image:functional-567858 testdata/build --alsologtostderr:
I0408 11:33:52.427968  385539 out.go:291] Setting OutFile to fd 1 ...
I0408 11:33:52.428494  385539 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:52.428550  385539 out.go:304] Setting ErrFile to fd 2...
I0408 11:33:52.428570  385539 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0408 11:33:52.429067  385539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
I0408 11:33:52.430398  385539 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:52.431080  385539 config.go:182] Loaded profile config "functional-567858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0408 11:33:52.431563  385539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:52.431622  385539 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:52.448151  385539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
I0408 11:33:52.448644  385539 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:52.449314  385539 main.go:141] libmachine: Using API Version  1
I0408 11:33:52.449342  385539 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:52.449782  385539 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:52.449997  385539 main.go:141] libmachine: (functional-567858) Calling .GetState
I0408 11:33:52.452056  385539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0408 11:33:52.452100  385539 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 11:33:52.467071  385539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
I0408 11:33:52.467532  385539 main.go:141] libmachine: () Calling .GetVersion
I0408 11:33:52.468083  385539 main.go:141] libmachine: Using API Version  1
I0408 11:33:52.468108  385539 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 11:33:52.468482  385539 main.go:141] libmachine: () Calling .GetMachineName
I0408 11:33:52.468675  385539 main.go:141] libmachine: (functional-567858) Calling .DriverName
I0408 11:33:52.468900  385539 ssh_runner.go:195] Run: systemctl --version
I0408 11:33:52.468931  385539 main.go:141] libmachine: (functional-567858) Calling .GetSSHHostname
I0408 11:33:52.471853  385539 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:52.472290  385539 main.go:141] libmachine: (functional-567858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:12:fa", ip: ""} in network mk-functional-567858: {Iface:virbr1 ExpiryTime:2024-04-08 12:30:50 +0000 UTC Type:0 Mac:52:54:00:fa:12:fa Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-567858 Clientid:01:52:54:00:fa:12:fa}
I0408 11:33:52.472328  385539 main.go:141] libmachine: (functional-567858) DBG | domain functional-567858 has defined IP address 192.168.39.170 and MAC address 52:54:00:fa:12:fa in network mk-functional-567858
I0408 11:33:52.472467  385539 main.go:141] libmachine: (functional-567858) Calling .GetSSHPort
I0408 11:33:52.472671  385539 main.go:141] libmachine: (functional-567858) Calling .GetSSHKeyPath
I0408 11:33:52.472840  385539 main.go:141] libmachine: (functional-567858) Calling .GetSSHUsername
I0408 11:33:52.473006  385539 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/functional-567858/id_rsa Username:docker}
I0408 11:33:52.555068  385539 build_images.go:161] Building image from path: /tmp/build.623239756.tar
I0408 11:33:52.555184  385539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0408 11:33:52.565652  385539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.623239756.tar
I0408 11:33:52.570337  385539 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.623239756.tar: stat -c "%s %y" /var/lib/minikube/build/build.623239756.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.623239756.tar': No such file or directory
I0408 11:33:52.570380  385539 ssh_runner.go:362] scp /tmp/build.623239756.tar --> /var/lib/minikube/build/build.623239756.tar (3072 bytes)
I0408 11:33:52.608800  385539 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.623239756
I0408 11:33:52.620220  385539 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.623239756 -xf /var/lib/minikube/build/build.623239756.tar
I0408 11:33:52.632448  385539 crio.go:315] Building image: /var/lib/minikube/build/build.623239756
I0408 11:33:52.632515  385539 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-567858 /var/lib/minikube/build/build.623239756 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0408 11:33:55.526413  385539 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-567858 /var/lib/minikube/build/build.623239756 --cgroup-manager=cgroupfs: (2.893870153s)
I0408 11:33:55.526491  385539 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.623239756
I0408 11:33:55.538843  385539 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.623239756.tar
I0408 11:33:55.550452  385539 build_images.go:217] Built localhost/my-image:functional-567858 from /tmp/build.623239756.tar
I0408 11:33:55.550497  385539 build_images.go:133] succeeded building to: functional-567858
I0408 11:33:55.550503  385539 build_images.go:134] failed building to: 
I0408 11:33:55.550539  385539 main.go:141] libmachine: Making call to close driver server
I0408 11:33:55.550556  385539 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:55.550915  385539 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:55.550934  385539 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:55.550942  385539 main.go:141] libmachine: Making call to close driver server
I0408 11:33:55.550950  385539 main.go:141] libmachine: (functional-567858) Calling .Close
I0408 11:33:55.551214  385539 main.go:141] libmachine: Successfully made call to close driver server
I0408 11:33:55.551230  385539 main.go:141] libmachine: Making call to close connection to plugin binary
I0408 11:33:55.551246  385539 main.go:141] libmachine: (functional-567858) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)
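The ImageBuild log above shows minikube shipping /tmp/build.623239756.tar to the node and running sudo podman build under the crio runtime. For reference, a minimal sketch of driving the same flow by hand; the build-context directory (./testdata/build) is an assumed placeholder, while the profile and tag names come from the log:

  # Build an image directly on the functional-567858 node (podman build under crio),
  # then confirm the tag appears in the node's image list.
  # NOTE: ./testdata/build is an assumed build-context path, not taken from the log.
  out/minikube-linux-amd64 -p functional-567858 image build -t localhost/my-image:functional-567858 ./testdata/build
  out/minikube-linux-amd64 -p functional-567858 image ls | grep my-image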

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.939015264s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-567858
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 update-context --alsologtostderr -v=2
E0408 11:33:44.543110  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:44.549249  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:44.559525  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:44.579945  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:44.620360  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:44.700751  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:44.861243  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:45.181841  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:45.822656  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:47.103671  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:33:49.664370  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
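The three UpdateContextCmd subtests above only run minikube update-context, which rewrites the profile's kubeconfig entry to the cluster's current endpoint. A minimal sketch of the same check by hand; the kubectl follow-up is an assumption, not part of the test:

  # Refresh the kubeconfig context for the profile, then confirm kubectl still reaches the cluster.
  out/minikube-linux-amd64 -p functional-567858 update-context --alsologtostderr -v=2
  kubectl --context functional-567858 get nodes   # assumed verification step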

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr: (5.07318998s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr: (2.608799138s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.90525892s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-567858
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr: (4.334402169s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image save gcr.io/google-containers/addon-resizer:functional-567858 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
2024/04/08 11:33:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image save gcr.io/google-containers/addon-resizer:functional-567858 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.953414732s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image rm gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (9.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (9.149561862s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (9.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-567858
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-567858 image save --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-567858 image save --daemon gcr.io/google-containers/addon-resizer:functional-567858 --alsologtostderr: (1.449446963s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-567858
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.48s)
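Taken together, the ImageCommands subtests above walk one image through load-to-cluster, save-to-file, remove, load-from-file, and save-back-to-docker. A condensed sketch of that round trip using the image and commands from the logs (the tar path is shortened from the workspace path in the log):

  # Round-trip an image between the local docker daemon, the cluster's crio storage, and a tarball.
  out/minikube-linux-amd64 -p functional-567858 image load --daemon gcr.io/google-containers/addon-resizer:functional-567858
  out/minikube-linux-amd64 -p functional-567858 image save gcr.io/google-containers/addon-resizer:functional-567858 ./addon-resizer-save.tar
  out/minikube-linux-amd64 -p functional-567858 image rm gcr.io/google-containers/addon-resizer:functional-567858
  out/minikube-linux-amd64 -p functional-567858 image load ./addon-resizer-save.tar
  out/minikube-linux-amd64 -p functional-567858 image save --daemon gcr.io/google-containers/addon-resizer:functional-567858
  out/minikube-linux-amd64 -p functional-567858 image ls   # verify the tag after each step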

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-567858
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-567858
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-567858
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (235.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-438604 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 11:34:05.025883  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:34:25.506100  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:35:06.466691  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:36:28.387299  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-438604 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m55.161238087s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (235.91s)
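StartCluster boots a cluster with multiple control-plane nodes (--ha) and then asks for per-node status, exactly as ha_test.go:101/107 show above. Repeated here as a stand-alone sketch with the flags from the log:

  # Start an HA (multi-control-plane) cluster on kvm2 with crio, then report per-node status.
  out/minikube-linux-amd64 start -p ha-438604 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr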

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-438604 -- rollout status deployment/busybox: (4.240362076s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-cdh5l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-gk5bx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-jz4h9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-cdh5l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-gk5bx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-jz4h9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-cdh5l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-gk5bx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-jz4h9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.77s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-cdh5l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-cdh5l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-gk5bx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-gk5bx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-jz4h9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-438604 -- exec busybox-7fdf7869d9-jz4h9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.50s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (49.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-438604 -v=7 --alsologtostderr
E0408 11:38:06.832109  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:06.837464  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:06.847793  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:06.868148  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:06.909323  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:06.989647  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:07.150141  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:07.470801  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:08.111845  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:09.392622  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:11.953211  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:17.074282  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:27.314511  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:38:44.542625  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:38:47.795114  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-438604 -v=7 --alsologtostderr: (48.209074162s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-438604 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (14.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp testdata/cp-test.txt ha-438604:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604:/home/docker/cp-test.txt ha-438604-m02:/home/docker/cp-test_ha-438604_ha-438604-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test_ha-438604_ha-438604-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604:/home/docker/cp-test.txt ha-438604-m03:/home/docker/cp-test_ha-438604_ha-438604-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test_ha-438604_ha-438604-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604:/home/docker/cp-test.txt ha-438604-m04:/home/docker/cp-test_ha-438604_ha-438604-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test_ha-438604_ha-438604-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp testdata/cp-test.txt ha-438604-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m02:/home/docker/cp-test.txt ha-438604:/home/docker/cp-test_ha-438604-m02_ha-438604.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test_ha-438604-m02_ha-438604.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m02:/home/docker/cp-test.txt ha-438604-m03:/home/docker/cp-test_ha-438604-m02_ha-438604-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test_ha-438604-m02_ha-438604-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m02:/home/docker/cp-test.txt ha-438604-m04:/home/docker/cp-test_ha-438604-m02_ha-438604-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test_ha-438604-m02_ha-438604-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp testdata/cp-test.txt ha-438604-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt ha-438604:/home/docker/cp-test_ha-438604-m03_ha-438604.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test_ha-438604-m03_ha-438604.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt ha-438604-m02:/home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test_ha-438604-m03_ha-438604-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m03:/home/docker/cp-test.txt ha-438604-m04:/home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test_ha-438604-m03_ha-438604-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp testdata/cp-test.txt ha-438604-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2055325835/001/cp-test_ha-438604-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt ha-438604:/home/docker/cp-test_ha-438604-m04_ha-438604.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604 "sudo cat /home/docker/cp-test_ha-438604-m04_ha-438604.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt ha-438604-m02:/home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test_ha-438604-m04_ha-438604-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 cp ha-438604-m04:/home/docker/cp-test.txt ha-438604-m03:/home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m03 "sudo cat /home/docker/cp-test_ha-438604-m04_ha-438604-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.46s)
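The CopyFile matrix above repeats the same pattern for every node pair: cp a file onto a node, cp it node-to-node, then ssh in and sudo cat it to verify. One iteration of that pattern, with the node names from the log:

  # Copy a file onto the primary node, fan it out to m02, and read the copy back over ssh.
  out/minikube-linux-amd64 -p ha-438604 cp testdata/cp-test.txt ha-438604:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-438604 cp ha-438604:/home/docker/cp-test.txt ha-438604-m02:/home/docker/cp-test_ha-438604_ha-438604-m02.txt
  out/minikube-linux-amd64 -p ha-438604 ssh -n ha-438604-m02 "sudo cat /home/docker/cp-test_ha-438604_ha-438604-m02.txt"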

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.539502598s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 node delete m03 -v=7 --alsologtostderr
E0408 11:48:44.543246  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-438604 node delete m03 -v=7 --alsologtostderr: (16.629053368s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.42s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (365.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-438604 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 11:53:06.832579  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 11:53:44.544065  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 11:54:29.877866  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-438604 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m4.758093613s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (365.58s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-438604 --control-plane -v=7 --alsologtostderr
E0408 11:58:06.832411  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-438604 --control-plane -v=7 --alsologtostderr: (1m17.287930169s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.18s)
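DeleteSecondaryNode and AddSecondaryNode together cover the control-plane membership lifecycle: drop the m03 node, later add a fresh control-plane member, and re-check status. A sketch combining the commands from ha_test.go:487, 605 and 611 above, plus the kubectl check from ha_test.go:511:

  # Remove one control-plane node, add a new one, and confirm cluster membership.
  out/minikube-linux-amd64 -p ha-438604 node delete m03 -v=7 --alsologtostderr
  out/minikube-linux-amd64 node add -p ha-438604 --control-plane -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-438604 status -v=7 --alsologtostderr
  kubectl get nodes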

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.58s)

                                                
                                    
TestJSONOutput/start/Command (95.78s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-581052 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-581052 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.777216365s)
--- PASS: TestJSONOutput/start/Command (95.78s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-581052 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-581052 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.42s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-581052 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-581052 --output=json --user=testUser: (7.423882302s)
--- PASS: TestJSONOutput/stop/Command (7.42s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-047194 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-047194 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.484909ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3e65bea1-8abe-4e74-b2f4-d7acabdde4d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-047194] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"78e8da98-b861-422e-98fe-d62d5c096686","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18588"}}
	{"specversion":"1.0","id":"92ab106d-eaf4-45d5-a4c6-332cddee74a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae68a099-a994-4889-a64e-489035b47ff4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig"}}
	{"specversion":"1.0","id":"ff8e5582-3e8a-4c15-8920-82390f7e39b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube"}}
	{"specversion":"1.0","id":"1b57738d-86f9-45b2-a0cc-49dd17a7f025","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"41d2f2e7-c73d-4834-8711-dda73cd6dd4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c8192a25-79ae-4c2e-8d17-afda41ba43a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-047194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-047194
--- PASS: TestErrorJSONOutput (0.24s)
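With --output=json every status line is a CloudEvents-style JSON object, as the stdout dump above shows (type io.k8s.sigs.minikube.step / .info / .error). A minimal sketch of filtering those events; the jq dependency is an assumption, the test itself parses the raw lines:

  # Re-run the failing start and keep only the error events; jq is assumed to be installed.
  out/minikube-linux-amd64 start -p json-output-error-047194 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'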

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (89.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-626689 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-626689 --driver=kvm2  --container-runtime=crio: (44.7917693s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-629699 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-629699 --driver=kvm2  --container-runtime=crio: (42.096809265s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-626689
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-629699
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-629699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-629699
helpers_test.go:175: Cleaning up "first-626689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-626689
--- PASS: TestMinikubeProfile (89.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-712817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-712817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.379649837s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.38s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-712817 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-712817 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
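StartWithMountFirst boots a no-kubernetes VM with a 9p host mount, and VerifyMountFirst then probes it by listing the mount point and grepping the mount table. The same sequence by hand, with the flags from the log (the grep runs locally on the mount output returned over ssh):

  # Start a VM with /minikube-host mounted over 9p on port 46464, then verify the mount from outside.
  out/minikube-linux-amd64 start -p mount-start-1-712817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p mount-start-1-712817 ssh -- ls /minikube-host
  out/minikube-linux-amd64 -p mount-start-1-712817 ssh -- mount | grep 9p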

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-730722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-730722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.330659006s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.33s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-730722 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-730722 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.12s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-712817 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-712817 --alsologtostderr -v=5: (1.116148515s)
--- PASS: TestMountStart/serial/DeleteFirst (1.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-730722 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-730722 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-730722
E0408 12:03:06.832311  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-730722: (1.380263426s)
--- PASS: TestMountStart/serial/Stop (1.38s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-730722
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-730722: (21.570100923s)
--- PASS: TestMountStart/serial/RestartStopped (22.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-730722 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-730722 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (101.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-830937 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0408 12:03:44.543931  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-830937 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m40.610293999s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.06s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-830937 -- rollout status deployment/busybox: (3.725036513s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-522p8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-jn6pk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-522p8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-jn6pk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-522p8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-jn6pk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.43s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-522p8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-522p8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-jn6pk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-830937 -- exec busybox-7fdf7869d9-jn6pk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (44.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-830937 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-830937 -v 3 --alsologtostderr: (43.416981246s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.03s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-830937 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp testdata/cp-test.txt multinode-830937:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1863887303/001/cp-test_multinode-830937.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937:/home/docker/cp-test.txt multinode-830937-m02:/home/docker/cp-test_multinode-830937_multinode-830937-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m02 "sudo cat /home/docker/cp-test_multinode-830937_multinode-830937-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937:/home/docker/cp-test.txt multinode-830937-m03:/home/docker/cp-test_multinode-830937_multinode-830937-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m03 "sudo cat /home/docker/cp-test_multinode-830937_multinode-830937-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp testdata/cp-test.txt multinode-830937-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1863887303/001/cp-test_multinode-830937-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt multinode-830937:/home/docker/cp-test_multinode-830937-m02_multinode-830937.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937 "sudo cat /home/docker/cp-test_multinode-830937-m02_multinode-830937.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937-m02:/home/docker/cp-test.txt multinode-830937-m03:/home/docker/cp-test_multinode-830937-m02_multinode-830937-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m03 "sudo cat /home/docker/cp-test_multinode-830937-m02_multinode-830937-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp testdata/cp-test.txt multinode-830937-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1863887303/001/cp-test_multinode-830937-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt multinode-830937:/home/docker/cp-test_multinode-830937-m03_multinode-830937.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937 "sudo cat /home/docker/cp-test_multinode-830937-m03_multinode-830937.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 cp multinode-830937-m03:/home/docker/cp-test.txt multinode-830937-m02:/home/docker/cp-test_multinode-830937-m03_multinode-830937-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 ssh -n multinode-830937-m02 "sudo cat /home/docker/cp-test_multinode-830937-m03_multinode-830937-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.03s)

                                                
                                    
TestMultiNode/serial/StopNode (2.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-830937 node stop m03: (1.605634869s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-830937 status: exit status 7 (452.892752ms)

                                                
                                                
-- stdout --
	multinode-830937
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-830937-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-830937-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-830937 status --alsologtostderr: exit status 7 (459.89314ms)

                                                
                                                
-- stdout --
	multinode-830937
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-830937-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-830937-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:06:13.870525  402889 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:06:13.870698  402889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:06:13.870712  402889 out.go:304] Setting ErrFile to fd 2...
	I0408 12:06:13.870718  402889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:06:13.870968  402889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:06:13.871452  402889 out.go:298] Setting JSON to false
	I0408 12:06:13.871488  402889 mustload.go:65] Loading cluster: multinode-830937
	I0408 12:06:13.872322  402889 notify.go:220] Checking for updates...
	I0408 12:06:13.872788  402889 config.go:182] Loaded profile config "multinode-830937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0408 12:06:13.872816  402889 status.go:255] checking status of multinode-830937 ...
	I0408 12:06:13.873470  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:13.873532  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:13.895839  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
	I0408 12:06:13.896366  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:13.896985  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:13.897026  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:13.897371  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:13.897602  402889 main.go:141] libmachine: (multinode-830937) Calling .GetState
	I0408 12:06:13.899218  402889 status.go:330] multinode-830937 host status = "Running" (err=<nil>)
	I0408 12:06:13.899233  402889 host.go:66] Checking if "multinode-830937" exists ...
	I0408 12:06:13.899500  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:13.899542  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:13.914860  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0408 12:06:13.915370  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:13.915973  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:13.916000  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:13.916367  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:13.916655  402889 main.go:141] libmachine: (multinode-830937) Calling .GetIP
	I0408 12:06:13.919839  402889 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:06:13.920265  402889 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:06:13.920299  402889 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:06:13.920465  402889 host.go:66] Checking if "multinode-830937" exists ...
	I0408 12:06:13.920780  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:13.920828  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:13.936212  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0408 12:06:13.936690  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:13.937163  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:13.937184  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:13.937498  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:13.937723  402889 main.go:141] libmachine: (multinode-830937) Calling .DriverName
	I0408 12:06:13.937930  402889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 12:06:13.937964  402889 main.go:141] libmachine: (multinode-830937) Calling .GetSSHHostname
	I0408 12:06:13.940784  402889 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:06:13.941274  402889 main.go:141] libmachine: (multinode-830937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:56:d1", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:03:47 +0000 UTC Type:0 Mac:52:54:00:52:56:d1 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:multinode-830937 Clientid:01:52:54:00:52:56:d1}
	I0408 12:06:13.941307  402889 main.go:141] libmachine: (multinode-830937) DBG | domain multinode-830937 has defined IP address 192.168.39.209 and MAC address 52:54:00:52:56:d1 in network mk-multinode-830937
	I0408 12:06:13.941498  402889 main.go:141] libmachine: (multinode-830937) Calling .GetSSHPort
	I0408 12:06:13.941690  402889 main.go:141] libmachine: (multinode-830937) Calling .GetSSHKeyPath
	I0408 12:06:13.941853  402889 main.go:141] libmachine: (multinode-830937) Calling .GetSSHUsername
	I0408 12:06:13.942022  402889 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937/id_rsa Username:docker}
	I0408 12:06:14.027878  402889 ssh_runner.go:195] Run: systemctl --version
	I0408 12:06:14.034396  402889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:06:14.052009  402889 kubeconfig.go:125] found "multinode-830937" server: "https://192.168.39.209:8443"
	I0408 12:06:14.052046  402889 api_server.go:166] Checking apiserver status ...
	I0408 12:06:14.052080  402889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 12:06:14.067097  402889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup
	W0408 12:06:14.078132  402889 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0408 12:06:14.078213  402889 ssh_runner.go:195] Run: ls
	I0408 12:06:14.082819  402889 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0408 12:06:14.087440  402889 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0408 12:06:14.087466  402889 status.go:422] multinode-830937 apiserver status = Running (err=<nil>)
	I0408 12:06:14.087476  402889 status.go:257] multinode-830937 status: &{Name:multinode-830937 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0408 12:06:14.087494  402889 status.go:255] checking status of multinode-830937-m02 ...
	I0408 12:06:14.087837  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:14.087876  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:14.103536  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0408 12:06:14.104099  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:14.104583  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:14.104608  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:14.104923  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:14.105070  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .GetState
	I0408 12:06:14.106362  402889 status.go:330] multinode-830937-m02 host status = "Running" (err=<nil>)
	I0408 12:06:14.106378  402889 host.go:66] Checking if "multinode-830937-m02" exists ...
	I0408 12:06:14.106701  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:14.106752  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:14.122278  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0408 12:06:14.122822  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:14.123338  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:14.123353  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:14.123654  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:14.123833  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .GetIP
	I0408 12:06:14.126439  402889 main.go:141] libmachine: (multinode-830937-m02) DBG | domain multinode-830937-m02 has defined MAC address 52:54:00:02:67:2d in network mk-multinode-830937
	I0408 12:06:14.126820  402889 main.go:141] libmachine: (multinode-830937-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:67:2d", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:04:47 +0000 UTC Type:0 Mac:52:54:00:02:67:2d Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-830937-m02 Clientid:01:52:54:00:02:67:2d}
	I0408 12:06:14.126843  402889 main.go:141] libmachine: (multinode-830937-m02) DBG | domain multinode-830937-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:02:67:2d in network mk-multinode-830937
	I0408 12:06:14.126998  402889 host.go:66] Checking if "multinode-830937-m02" exists ...
	I0408 12:06:14.127307  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:14.127347  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:14.142926  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40263
	I0408 12:06:14.143408  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:14.143959  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:14.143984  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:14.144270  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:14.144474  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .DriverName
	I0408 12:06:14.144674  402889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0408 12:06:14.144695  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .GetSSHHostname
	I0408 12:06:14.147612  402889 main.go:141] libmachine: (multinode-830937-m02) DBG | domain multinode-830937-m02 has defined MAC address 52:54:00:02:67:2d in network mk-multinode-830937
	I0408 12:06:14.148076  402889 main.go:141] libmachine: (multinode-830937-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:67:2d", ip: ""} in network mk-multinode-830937: {Iface:virbr1 ExpiryTime:2024-04-08 13:04:47 +0000 UTC Type:0 Mac:52:54:00:02:67:2d Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:multinode-830937-m02 Clientid:01:52:54:00:02:67:2d}
	I0408 12:06:14.148108  402889 main.go:141] libmachine: (multinode-830937-m02) DBG | domain multinode-830937-m02 has defined IP address 192.168.39.128 and MAC address 52:54:00:02:67:2d in network mk-multinode-830937
	I0408 12:06:14.148266  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .GetSSHPort
	I0408 12:06:14.148446  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .GetSSHKeyPath
	I0408 12:06:14.148636  402889 main.go:141] libmachine: (multinode-830937-m02) Calling .GetSSHUsername
	I0408 12:06:14.148796  402889 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18588-368424/.minikube/machines/multinode-830937-m02/id_rsa Username:docker}
	I0408 12:06:14.235422  402889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 12:06:14.250957  402889 status.go:257] multinode-830937-m02 status: &{Name:multinode-830937-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0408 12:06:14.251004  402889 status.go:255] checking status of multinode-830937-m03 ...
	I0408 12:06:14.251469  402889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0408 12:06:14.251514  402889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0408 12:06:14.267988  402889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I0408 12:06:14.268571  402889 main.go:141] libmachine: () Calling .GetVersion
	I0408 12:06:14.269064  402889 main.go:141] libmachine: Using API Version  1
	I0408 12:06:14.269084  402889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0408 12:06:14.269388  402889 main.go:141] libmachine: () Calling .GetMachineName
	I0408 12:06:14.269601  402889 main.go:141] libmachine: (multinode-830937-m03) Calling .GetState
	I0408 12:06:14.271382  402889 status.go:330] multinode-830937-m03 host status = "Stopped" (err=<nil>)
	I0408 12:06:14.271397  402889 status.go:343] host is not running, skipping remaining checks
	I0408 12:06:14.271403  402889 status.go:257] multinode-830937-m03 status: &{Name:multinode-830937-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.52s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (30.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-830937 node start m03 -v=7 --alsologtostderr: (29.928104569s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.59s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-830937 node delete m03: (1.870107313s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.44s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (169.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-830937 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-830937 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m48.931098471s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-830937 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (169.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-830937
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-830937-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-830937-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.527432ms)

                                                
                                                
-- stdout --
	* [multinode-830937-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-830937-m02' is duplicated with machine name 'multinode-830937-m02' in profile 'multinode-830937'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-830937-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-830937-m03 --driver=kvm2  --container-runtime=crio: (45.468383158s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-830937
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-830937: exit status 80 (246.060717ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-830937 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-830937-m03 already exists in multinode-830937-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-830937-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-830937-m03: (1.055377425s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.91s)

                                                
                                    
TestScheduledStopUnix (116.19s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-959828 --memory=2048 --driver=kvm2  --container-runtime=crio
E0408 12:23:27.589758  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:23:44.544669  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-959828 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.356006957s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959828 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-959828 -n scheduled-stop-959828
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959828 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959828 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959828 -n scheduled-stop-959828
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-959828
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959828 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-959828
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-959828: exit status 7 (87.974715ms)

                                                
                                                
-- stdout --
	scheduled-stop-959828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959828 -n scheduled-stop-959828
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959828 -n scheduled-stop-959828: exit status 7 (81.431988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-959828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-959828
--- PASS: TestScheduledStopUnix (116.19s)

                                                
                                    
TestRunningBinaryUpgrade (243.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1725055821 start -p running-upgrade-115055 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1725055821 start -p running-upgrade-115055 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.79429018s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-115055 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0408 12:27:49.879490  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-115055 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m50.82979883s)
helpers_test.go:175: Cleaning up "running-upgrade-115055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-115055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-115055: (1.200122684s)
--- PASS: TestRunningBinaryUpgrade (243.16s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-105795 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-105795 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (92.814126ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-105795] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (98.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-105795 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-105795 --driver=kvm2  --container-runtime=crio: (1m38.668359135s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-105795 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (29.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-105795 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-105795 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.08873347s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-105795 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-105795 status -o json: exit status 2 (276.833134ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-105795","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-105795
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-105795: (1.020347701s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.39s)

                                                
                                    
TestNetworkPlugins/group/false (3.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-583253 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-583253 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (127.761899ms)

                                                
                                                
-- stdout --
	* [false-583253] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18588
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0408 12:26:56.198657  412219 out.go:291] Setting OutFile to fd 1 ...
	I0408 12:26:56.199076  412219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:26:56.199092  412219 out.go:304] Setting ErrFile to fd 2...
	I0408 12:26:56.199098  412219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0408 12:26:56.199804  412219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18588-368424/.minikube/bin
	I0408 12:26:56.201060  412219 out.go:298] Setting JSON to false
	I0408 12:26:56.202694  412219 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7759,"bootTime":1712571457,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0408 12:26:56.202814  412219 start.go:139] virtualization: kvm guest
	I0408 12:26:56.205163  412219 out.go:177] * [false-583253] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0408 12:26:56.207355  412219 out.go:177]   - MINIKUBE_LOCATION=18588
	I0408 12:26:56.208779  412219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 12:26:56.207452  412219 notify.go:220] Checking for updates...
	I0408 12:26:56.211636  412219 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18588-368424/kubeconfig
	I0408 12:26:56.213208  412219 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18588-368424/.minikube
	I0408 12:26:56.214845  412219 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0408 12:26:56.216483  412219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 12:26:56.218278  412219 config.go:182] Loaded profile config "NoKubernetes-105795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0408 12:26:56.218411  412219 config.go:182] Loaded profile config "kubernetes-upgrade-144569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0408 12:26:56.218512  412219 config.go:182] Loaded profile config "running-upgrade-115055": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0408 12:26:56.218637  412219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0408 12:26:56.256568  412219 out.go:177] * Using the kvm2 driver based on user configuration
	I0408 12:26:56.257911  412219 start.go:297] selected driver: kvm2
	I0408 12:26:56.257943  412219 start.go:901] validating driver "kvm2" against <nil>
	I0408 12:26:56.257960  412219 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 12:26:56.260284  412219 out.go:177] 
	W0408 12:26:56.261796  412219 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0408 12:26:56.263319  412219 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-583253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-583253" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.61.6:8443
  name: NoKubernetes-105795
contexts:
- context:
    cluster: NoKubernetes-105795
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: NoKubernetes-105795
  name: NoKubernetes-105795
current-context: NoKubernetes-105795
kind: Config
preferences: {}
users:
- name: NoKubernetes-105795
  user:
    client-certificate: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/NoKubernetes-105795/client.crt
    client-key: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/NoKubernetes-105795/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-583253

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-583253"

                                                
                                                
----------------------- debugLogs end: false-583253 [took: 3.501408317s] --------------------------------
helpers_test.go:175: Cleaning up "false-583253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-583253
--- PASS: TestNetworkPlugins/group/false (3.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (44.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-105795 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-105795 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.592929071s)
--- PASS: TestNoKubernetes/serial/Start (44.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-105795 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-105795 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.022101ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.179483955s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-105795
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-105795: (1.601324933s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (42.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-105795 --driver=kvm2  --container-runtime=crio
E0408 12:28:06.832579  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
E0408 12:28:44.542875  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-105795 --driver=kvm2  --container-runtime=crio: (42.601190269s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-105795 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-105795 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.068452ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestPause/serial/Start (99.83s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-778946 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-778946 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m39.83167545s)
--- PASS: TestPause/serial/Start (99.83s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (106.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4011812925 start -p stopped-upgrade-660392 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4011812925 start -p stopped-upgrade-660392 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (56.375811187s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4011812925 -p stopped-upgrade-660392 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4011812925 -p stopped-upgrade-660392 stop: (2.459893921s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-660392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-660392 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.360475286s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.20s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (50.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-778946 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-778946 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.695639624s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-660392
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (101.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m41.765156592s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.77s)

                                                
                                    
x
+
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-778946 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-778946 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-778946 --output=json --layout=cluster: exit status 2 (288.29584ms)

                                                
                                                
-- stdout --
	{"Name":"pause-778946","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-778946","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.337411019s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.34s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-778946 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-778946 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-778946 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-778946 --alsologtostderr -v=5: (1.033926691s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (122.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0408 12:33:06.832679  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/functional-567858/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m2.954926037s)
--- PASS: TestNetworkPlugins/group/calico/Start (122.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sctxn" [7d3ec500-4cd7-4320-9f32-8bbacaa94fd8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006169261s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t48hz" [669f1179-82d2-4bba-9e98-7d2f48137239] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t48hz" [669f1179-82d2-4bba-9e98-7d2f48137239] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004877343s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5gx5g" [606eeb6f-0d77-4174-8c95-5ab96347a864] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5gx5g" [606eeb6f-0d77-4174-8c95-5ab96347a864] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004067576s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.063088796s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (129.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m9.721867278s)
--- PASS: TestNetworkPlugins/group/bridge/Start (129.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8gtdj" [0f09d31b-1fd5-4287-85e4-4a6660a66038] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005433377s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nhnqx" [4f632983-90b1-43ed-bddd-c7adc62118ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nhnqx" [4f632983-90b1-43ed-bddd-c7adc62118ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005459644s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (93.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m33.888468478s)
--- PASS: TestNetworkPlugins/group/flannel/Start (93.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fhk2g" [1091b379-d4be-4018-b842-279358981497] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fhk2g" [1091b379-d4be-4018-b842-279358981497] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.006473527s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (100.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-583253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m40.967104065s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hrj9x" [41af5d52-f306-45e4-8ae2-10b5b5e8e30a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hrj9x" [41af5d52-f306-45e4-8ae2-10b5b5e8e30a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005237577s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-khpcg" [70c12014-7f58-4f94-9a59-e266494a4bd6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.012668748s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lllmb" [51ce81c4-4103-4f09-adf7-1b333fc54a63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lllmb" [51ce81c4-4103-4f09-adf7-1b333fc54a63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004916854s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (141.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-135234 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-135234 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (2m21.127581109s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (141.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-583253 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-583253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-crj95" [d8fc9c88-64e0-4143-8297-578dbf318fb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-crj95" [d8fc9c88-64e0-4143-8297-578dbf318fb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004661027s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (101.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-488947 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-488947 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m41.05129822s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-583253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-583253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
E0408 13:07:41.604143  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/enable-default-cni-583253/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-527454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0408 12:38:22.066372  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.071779  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.082110  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.102460  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.142825  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.223315  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.383949  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:22.704764  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:23.345262  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:24.625964  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:27.187153  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:31.891809  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:31.897141  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:31.907510  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:31.927866  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:31.968227  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:32.048601  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:32.209197  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:32.307815  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:32.530166  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:33.171197  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:34.451648  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:37.012579  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:42.133666  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:38:42.548272  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:38:44.543145  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/addons-825010/client.crt: no such file or directory
E0408 12:38:52.374752  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:39:03.029393  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/kindnet-583253/client.crt: no such file or directory
E0408 12:39:12.855923  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
E0408 12:39:13.957956  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:13.963250  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:13.973568  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:13.993949  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:14.034527  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:14.114957  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:14.275972  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:14.596605  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:15.237684  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
E0408 12:39:16.518799  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-527454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m9.670215079s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c77b2e35-4ace-435b-90b8-6cef7cd91116] Pending
E0408 12:39:19.079267  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c77b2e35-4ace-435b-90b8-6cef7cd91116] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c77b2e35-4ace-435b-90b8-6cef7cd91116] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00380934s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-135234 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e34c664b-3926-4ddf-98b9-7bb599eee6ca] Pending
E0408 12:39:24.200100  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/calico-583253/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e34c664b-3926-4ddf-98b9-7bb599eee6ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e34c664b-3926-4ddf-98b9-7bb599eee6ca] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00473245s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-135234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-527454 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-527454 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-488947 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ae07177b-6d33-4635-a69f-6236f76cba1f] Pending
helpers_test.go:344: "busybox" [ae07177b-6d33-4635-a69f-6236f76cba1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ae07177b-6d33-4635-a69f-6236f76cba1f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.006010181s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-488947 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-135234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-135234 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-488947 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-488947 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02613575s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-488947 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (682.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-527454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-527454 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (11m22.343666443s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527454 -n default-k8s-diff-port-527454
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (682.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (588.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-135234 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-135234 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (9m47.90162674s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-135234 -n no-preload-135234
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (588.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (633.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-488947 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-488947 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (10m32.885790329s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-488947 -n embed-certs-488947
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (633.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-384148 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-384148 --alsologtostderr -v=3: (4.309108974s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-384148 -n old-k8s-version-384148: exit status 7 (77.921294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-384148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (57.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-337169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-337169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (57.472654506s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-337169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-337169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.227247975s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-337169 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-337169 --alsologtostderr -v=3: (7.489859024s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.49s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-337169 -n newest-cni-337169
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-337169 -n newest-cni-337169: exit status 7 (99.095738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-337169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-337169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-337169 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (38.773971407s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-337169 -n newest-cni-337169
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-337169 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-337169 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-337169 -n newest-cni-337169
E0408 13:08:31.892570  375817 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/auto-583253/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-337169 -n newest-cni-337169: exit status 2 (261.168449ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-337169 -n newest-cni-337169
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-337169 -n newest-cni-337169: exit status 2 (255.814292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-337169 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-337169 -n newest-cni-337169
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-337169 -n newest-cni-337169
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

                                                
                                    

Test skip (39/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.0/binaries 0
25 TestDownloadOnly/v1.30.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
264 TestNetworkPlugins/group/kubenet 4.21
272 TestNetworkPlugins/group/cilium 5.63
284 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-583253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-583253

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-583253" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: kubelet daemon config:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> k8s: kubelet logs:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.61.6:8443
  name: NoKubernetes-105795
contexts:
- context:
    cluster: NoKubernetes-105795
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: NoKubernetes-105795
  name: NoKubernetes-105795
current-context: NoKubernetes-105795
kind: Config
preferences: {}
users:
- name: NoKubernetes-105795
  user:
    client-certificate: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/NoKubernetes-105795/client.crt
    client-key: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/NoKubernetes-105795/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-583253

>>> host: docker daemon status:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: docker daemon config:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: docker system info:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: cri-docker daemon status:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: cri-docker daemon config:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: cri-dockerd version:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: containerd daemon status:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: containerd daemon config:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: containerd config dump:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: crio daemon status:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: crio daemon config:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: /etc/crio:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

>>> host: crio config:
* Profile "kubenet-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-583253"

----------------------- debugLogs end: kubenet-583253 [took: 4.03574622s] --------------------------------
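
Note: every probe in the dump above fails with "context "kubenet-583253" does not exist" or "Profile "kubenet-583253" not found" because the skipped test never ran "minikube start -p kubenet-583253", so neither the profile nor a matching kubeconfig context was created; the kubeconfig shown still carries the leftover NoKubernetes-105795 context. A minimal way to confirm this locally (hypothetical commands, not part of the harness output):

  minikube profile list            # kubenet-583253 would be absent
  kubectl config get-contexts      # only surviving profiles such as NoKubernetes-105795 are listed
  kubectl config current-context   # would print the stale context (NoKubernetes-105795 at the time of this dump)
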
helpers_test.go:175: Cleaning up "kubenet-583253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-583253
--- SKIP: TestNetworkPlugins/group/kubenet (4.21s)

TestNetworkPlugins/group/cilium (5.63s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-583253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-583253

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-583253

>>> host: /etc/nsswitch.conf:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/hosts:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/resolv.conf:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-583253

>>> host: crictl pods:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: crictl containers:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> k8s: describe netcat deployment:
error: context "cilium-583253" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-583253" does not exist

>>> k8s: netcat logs:
error: context "cilium-583253" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-583253" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-583253" does not exist

>>> k8s: coredns logs:
error: context "cilium-583253" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-583253" does not exist

>>> k8s: api server logs:
error: context "cilium-583253" does not exist

>>> host: /etc/cni:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: ip a s:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: ip r s:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: iptables-save:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: iptables table nat:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-583253

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-583253

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-583253" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-583253" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-583253

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-583253

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-583253" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-583253" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-583253" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-583253" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-583253" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: kubelet daemon config:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> k8s: kubelet logs:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18588-368424/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.61.6:8443
  name: NoKubernetes-105795
contexts:
- context:
    cluster: NoKubernetes-105795
    extensions:
    - extension:
        last-update: Mon, 08 Apr 2024 12:26:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: NoKubernetes-105795
  name: NoKubernetes-105795
current-context: NoKubernetes-105795
kind: Config
preferences: {}
users:
- name: NoKubernetes-105795
  user:
    client-certificate: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/NoKubernetes-105795/client.crt
    client-key: /home/jenkins/minikube-integration/18588-368424/.minikube/profiles/NoKubernetes-105795/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-583253

>>> host: docker daemon status:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: docker daemon config:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: docker system info:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: cri-docker daemon status:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: cri-docker daemon config:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: cri-dockerd version:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: containerd daemon status:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: containerd daemon config:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: containerd config dump:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: crio daemon status:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: crio daemon config:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: /etc/crio:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

>>> host: crio config:
* Profile "cilium-583253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-583253"

----------------------- debugLogs end: cilium-583253 [took: 5.474154675s] --------------------------------
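
Note: the cilium dump follows the same pattern as the kubenet one; kubectl reports "context was not found for specified context: cilium-583253" (rather than a connection error) because the kubeconfig's current-context still points at NoKubernetes-105795. If this were being reproduced by hand, the profile and context would normally come from starting the cluster first (hypothetical commands, not run by the skipped test):

  minikube start -p cilium-583253 --cni=cilium
  kubectl config use-context cilium-583253
  kubectl --context cilium-583253 get pods -A
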
helpers_test.go:175: Cleaning up "cilium-583253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-583253
--- SKIP: TestNetworkPlugins/group/cilium (5.63s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-122490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-122490
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)